IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, VOL. 30, NO. 8, AUGUST 2024 4429
How Does Automation Shape the Process
of Narrative Visualization: A Survey of Tools
Qing Chen, Shixiong Cao, Jiazhe Wang, and Nan Cao, Member, IEEE
Abstract—In recent years, narrative visualization has gained
much attention. Researchers have proposed different design spaces
for various narrative visualization genres and scenarios to facilitate
the creation process. As users’ needs grow and automation tech-
nologies advance, increasingly more tools have been designed and
developed. In this study, we summarized six genres of narrative vi-
sualization (annotated charts, infographics, timelines & storylines,
data comics, scrollytelling & slideshow, and data videos) based on
previous research and four types of tools (design spaces, authoring
tools, ML/AI-supported tools, and ML/AI-generator tools) based
on the intelligence and automation level of the tools. We surveyed
105 papers and tools to study how automation can progressively
engage in visualization design and narrative processes to help users
easily create narrative visualizations. This research aims to provide
an overview of current research and development in the automa-
tion involvement of narrative visualization tools. We discuss key
research problems in each category and suggest new opportunities
to encourage further research in the related domain.
Index Terms—Authoring tools, automatic visualization, data
visualization, design space, narrative visualization, survey.
I. INTRODUCTION
Data visualization has been broadly applied to communicate data and information in an effective and expressive
manner. Recently, an emerging trend has been to combine nar-
rative and storytelling with visualization [1]. The norms of com-
municative and exploratory information visualization are used
in narrative visualizations in order to tell the desired story [2].
However, creating visualizations with narrative information is a
challenging and time-consuming task. Such a creation usually
requires data analytic skills and visualization design expertise.
Even experts need to spend a considerable amount of time and
effort to create the ideal visualization. Therefore, by summariz-
ing the experience in practice, researchers specify various design
spaces and visualization scenarios for distinct narrative genres,
which are used to guide users to create narrative visualizations.
Manuscript received 23 June 2022; revised 7 March 2023; accepted 20 March
2023. Date of publication 27 March 2023; date of current version 1 July 2024.
This work was supported in part by the NSFC under Grants 62002267, 62072338,
and 62061136003, in part by NSF Shanghai under Grant 23ZR1464700, and in
part by Shanghai Education Development Foundation “Chen-Guang Project”
under Grant 21CGA75. Recommended for acceptance by S. Bruckner. (Corre-
sponding author: Nan Cao.)
Qing Chen, Shixiong Cao, and Nan Cao are with the Intelligent Big
Data Visualization Lab, Tongji University, Shanghai 200070, China (e-mail:
qingchen@tongji.edu.cn; caoshixiong@tongji.edu.cn; nan.cao@tongji.edu.cn).
Jiazhe Wang is with Ant Group, Hangzhou 310000, China
(e-mail: jiazhe.wjz@antgroup.com).
Digital Object Identifier 10.1109/TVCG.2023.3261320
With the emergence of new user requirements and the ad-
vancement of automation technology, an increasing number of
intelligent tools have been created to assist the visual creative
process. Authoring tools offer rich interactions that allow users
to adequately control the creation process. However, such tools
still require users to decide on each visualization element man-
ually. To further lower the barriers and reduce the burdens
of creation, researchers have developed ML/AI-supported tools
and ML/AI-generator tools to support a more automatic process.
ML/AI-supported tools usually provide recommendations as
part of the narrative visualization creation process. Normally,
users need to make their own design choices to achieve the design
outcome. However, ML/AI-generator tools do not require user
expertise in visualization and can generate a complete set of
visualization designs without user intervention.
Recent surveys on automated techniques have focused on
traditional statistical charts [3], [4], [5]. Automatic tools that support various genres of narrative visualizations have not been sufficiently investigated, and systematic reviews of how (and to what extent) automation shapes visual design and visual narrative processes are generally lacking. The narrative process
describes the primary responsibilities and actions of data visual-
ization storytellers and the types of artifacts that come from these
activities [6]. In addition, most previous studies aim at the cre-
ation process from the visual design level. Advances in artificial
intelligence and human-computer interaction have brought more
opportunities and challenges to this field. Therefore, a state-of-
the-art survey is required to provide a better understanding of
automation involvement in narrative visualization creation tools.
To fill this gap, we collected 91 design spaces and tools
covering the six genres of narrative visualization and classified
them into four automation levels, allowing us to describe how
automatic techniques could be progressively used in visualiza-
tion design and visual narrative, further allowing users to create
data visualizations. By analyzing the tools of each narrative
visualization genre, we compared the focus of the four levels of
tools in each narrative genre so that users can easily choose the appropriate tool for different scenarios. Fur-
thermore, we identified both mature and less-explored research
directions for automated visual narrative tools and presented
new research problems and future work to assist researchers
in advancing their grasp of the subject matter and pursuing
their investigations. In addition to the state-of-the-art survey, we
developed an interactive browser to facilitate the exploration
and presentation of the collected design spaces and tools at
http://autovis.idvxlab.com/.
© 2023 The Authors. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see
https://creativecommons.org/licenses/by/4.0/
II. RELATED SURVEY AND TAXONOMY
In this section, we first perform a literature review on narrative
visualization. Then, we introduce papers that are most relevant to
our work. Next, we present our survey scope and methodology.
Finally, we describe the taxonomy of this survey.
A. Narrative Visualization and Storytelling Process
Our research is influenced by the emergence of narrative
visualization theories and visual storytelling technologies. Ac-
cording to Segel et al. [98], narrative visualization comprises
three essential components: narrative genres, narrative struc-
ture (methods for organizing plot or information), and visual
narrative (visual methods for generating story experiences and
transmitting messages).
Hullman et al. [74] summarized how automated sequencing
can assist users in making organized choices when creating
narrative visuals. According to Lee et al. [6], aiming to achieve
the goal of message delivery, visual data stories must have a
collection of narrative segments backed by data and presented
in a coherent order. In addition, the process of creating narrative
visualizations is not always linear. This process can be roughly divided into three steps: investigating the data, making a
story, and telling the story. Tong et al. [99] surveyed the literature
on storytelling in visualization, covering the logical concepts of
who is the subject of the narrative visual (creation tool and audi-
ence), how the story is told (narration and transition), and why
we can use visual narratives (memory and interpretation). The
goal of this research is to provide researchers and practitioners
with an overview of the development and research for various
narrative visualization tools.
B. Related Survey
This section outlines the surveys related to automated vi-
sualization techniques and tools. Wang et al. [3] surveyed 88
papers on ML4VIS and explained seven main processes of
machine learning techniques applied to visualization: Data Pro-
cessing4VIS, Data-VIS Mapping, Insight Communication, Style
Imitation, VIS Interaction, VIS Reading, and User Profiling.
Wu et al. [4] reviewed recent advances in artificial intelligence
techniques applied to visual data, examining a number of key
research questions related to the development and management
of visual data and the support provided by artificial intelligence
for these operations. The study by Zhu et al. [5] is the most
relevant to us, in which they investigated automated visualization
techniques for infographics. However, no previous work has
thoroughly analyzed different levels of automation and how
those tools help the design and creation process of visual sto-
rytelling in different narrative forms. Our effort seeks to give
an overview of available design tools that may assist a variety
of users in various design situations. Moreover, through the
analysis, we identify directions that remain undeveloped for
future research.
C. Survey Scope and Methodology
Our research focuses on narrative visualization tools. Tong
et al. [99] emphasized in their research that narrative visualiza-
tion focuses more on information visualization than scientific
visualization. In addition, studies on narrative scientific visual-
ization have been limited; therefore, scientific visualization was
excluded from our study.
To create the corpus of articles, we gathered papers from visualization journals and conferences by using reference-driven and search-driven methods. For the reference-driven selection, we started with a collection of references on the categorization of narrative visualization in this area, and we then broadened the focus by looking up both citing and cited publications. For the search-driven selection, we completed two rounds of article gathering. A
preliminary search for narrative visualizations, relevant design
tools, and best practices was conducted in the first round by
using high-impact visualization conferences and publications.
In particular, we selected five conferences (ACM CHI, IEEE InfoVis, IEEE VAST, IEEE PacificVis, and IV) and five journals (IEEE TVCG, IEEE CGA, ACM Transactions on Graphics, Computer Graphics Forum, and Visual Informatics). We gathered a variety of publications by using two search terms (“visualization” and “design space/design guide”, “visualization” and “authoring tool”) and then evaluated abstracts and full texts to
narrow down our sample.
After this round of article selection, 348 papers and tools were
obtained. To achieve a more precise review of the literature about
narrative visualization, we used narrative visualization genres
and tools (e.g., “data comics” and “design space/authoring tool”, “infographics” and “design space/authoring tool”, etc.) to categorize the papers. Furthermore, we removed programming tools and domain-specific application tools, as they are beyond the scope of this research. Finally, 91 narrative visualization papers and tools are summarized in Table I and Fig. 1. In Table I and Fig. 1, we excluded 14 commercial software tools, mainly because most of them do not have a definite publication date, and commercial software tends to receive frequent updates and additional features, which makes it difficult to fix a specific year.
D. Taxonomy
In this section, we will first describe the four levels of automa-
tion and then introduce the detailed classification of narrative
visualization in our survey.
1) Tool Classification Method: In this section, we categorize
the visualization tools into four groups based on their automation
and intelligence [5], [37].
Design space is a conceptual set of possibilities rather
than a software tool [100]. Design space stresses the
ability to choose from a variety of possibilities and
investigate alternatives [101], [102]. The design space is a de-
scription of all potential design options throughout the design
process. Utilizing basic design principles from current visual-
ization techniques is the most preferred method for building a
design space [103]. Moreover, visual design spaces allow us to
capture some implicit knowledge of graphic designers [104].
TABLE I
THE DESIGN SPACES AND TOOLS OF MAJOR NARRATIVE VISUALIZATION GENRES
Fig. 1. Number of relevant research publications or tools in different genres
for narrative visualization in chronological order.
We believe that the visualization design space is an attempt to
understand how visualizations are created by designers in a sys-
tematic process, that is, to decompose a design work into several
design elements and arrange them properly. A straightforward
design space makes the design more structured and disciplined,
allowing designers to create designs without relying on pure
feelings. It is also the basis for computers to understand the
design and eventually create tools to facilitate the design process.
Authoring tool encapsulates key software functionalities
and features for content creation [105]. It is an appli-
cation or tool designed for a specific design purpose.
Authoring tools allow users to create visualizations freely with
interactive features. They usually require designers to design
starting from scratch, allowing designers to have major control
of the creation process. On the users’ side, authoring tools
allow them to understand the creation framework in advance
and eventually interact with the system.
ML/AI-supported tools apply intelligent algorithms to
facilitate visualization creation. Such tools ease visual
generation while ensuring a certain degree of control for
users in the creation process. ML/AI-supported tools focus on
automatically providing some steps or automatically visualizing
some elements, while users need to make decisions on some
important steps to create the visualization. A recommended solu-
tion is usually provided for a particular part of the visualization.
Eventually, users can organize the design content to form the
final visualization outcome.
ML/AI-generator tool is even more intelligent, as users
no longer need to participate in decision making in the
authoring process. The ML/AI-generator tool is designed
to reduce barriers for amateurs to create visualizations automati-
cally and ease the burdens for experts to search and select without
manually specifying all elements [5]. When the user uploads the
data, this type of tool automates the process and analysis of the
data and can generate a complete set of visual design solutions
without user intervention.
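The four automation levels above form an ordered scale, from conceptual design spaces to fully automatic generators. As a minimal illustrative sketch only (the `AutomationLevel` enum, the `Tool` dataclass, and the level tags below are our own hypothetical shorthand, not an encoding defined by the surveyed work), the taxonomy can be expressed in code:

```python
from dataclasses import dataclass
from enum import IntEnum

# The four automation levels described above, ordered by
# increasing automation and intelligence.
class AutomationLevel(IntEnum):
    DESIGN_SPACE = 1     # conceptual set of design possibilities
    AUTHORING_TOOL = 2   # manual creation with rich interactions
    ML_AI_SUPPORTED = 3  # recommendations; user makes key decisions
    ML_AI_GENERATOR = 4  # complete designs without user intervention

@dataclass
class Tool:
    name: str
    genre: str
    level: AutomationLevel

# A few annotated-chart tools from this survey, tagged by level.
tools = [
    Tool("ChartAccent", "annotated chart", AutomationLevel.AUTHORING_TOOL),
    Tool("Kori", "annotated chart", AutomationLevel.ML_AI_SUPPORTED),
    Tool("Contextifier", "annotated chart", AutomationLevel.ML_AI_GENERATOR),
]

# Filtering by level mirrors how a reader might shortlist tools
# that automate at least part of the creation process.
automatic = [t.name for t in tools
             if t.level >= AutomationLevel.ML_AI_SUPPORTED]
print(automatic)  # ['Kori', 'Contextifier']
```

Because `IntEnum` members compare as integers, the levels can be ordered and filtered directly, which matches the survey's notion of progressively increasing automation.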
2) Visual Classification Method: Segel et al. [98] presented
seven genres of narrative visualization: magazine-style, anno-
tated chart, partitioned poster, flowchart, comic strip, slideshow, and film/video/animation. Recently, Roth [75] classified visual sto-
rytelling into seven genres: static visual stories, long-form in-
fographics, dynamic slides, narrative animations, multimedia
visual experiences, personalized story maps, and compilations
(compilations provide a “visual abstract” that typically links to
further text) [75]. On the basis of their findings and the pre-
sentation outcomes, we reclassified the narrative visualization
genres. In this research, magazine style, partitioned posters, and
static visual stories were jointly studied and then collectively
referred to as infographics. Film/video/animation, narrative animations, and multimedia visual experiences are called data
videos. Slideshow, compilations, and long-form infographics are
collectively called “scrollytelling & slideshow”. In the literature
review, we found only a few works about flowcharts. However,
many works on timelines & storylines are presented in the form
of flowcharts; thus, we grouped flowcharts together with timelines & storylines.
Roth [75] found that personalized story maps are similar
to adding annotations to maps; in this study, we classified them
as annotated charts. In conclusion, we focused on six genres
of narrative visualizations in this survey: annotated charts,
infographics, timelines & storylines, data comics, scrollytelling
& slideshow, and data videos.
We surveyed the literature [98], [99] to further summarize
various tools with different narrative orderings and interactiv-
ity. Segel et al. [98] summarized three kinds of ordering for
narrative visualization: linear (the author specifies this path),
random access (no path is specified), and user-directed (users
may choose a route from various available pathways or design
their own). Tong et al. [99] added another ordering type called
parallel (multiple paths can be displayed simultaneously). Apart
from the 38 references listed in the Design Space category in
Table I, some studies proposed techniques or algorithms without
developing a fully functional visualization tool with appropriate
interfaces. Therefore, we selected the 36 visualization tools
that include interactive functions and support the creation of
narrative structures for each narrative genre. Their narrative
orderings and interactivity are also marked in Fig. 2.
As shown in Fig. 2, most tools support linear ordering, and
relatively few support random ordering. On average, the tools
for annotated charts support the fewest narrative ordering types,
while the tools for timelines and data videos support the most
narrative ordering types. Segel et al. [98] proposed six types of
interactions for narrative structures, of which hover highlighting,
filtering/selection/search, and navigation buttons are the three
most common interaction types. As we explored the selected
tools, we discovered two standard interaction types: scrolling, which includes landscape and portrait scrolling, and drawing, which supports “touch+pen” interaction.
III. ANNOTATED CHART
Annotated charts use graphics (arrows or trend lines) or text (data values or commentary) to supplement information, adding context to a visualization that introduces or reinforces the meaning of the data. Annota-
tions allow audiences to focus on specific content or critical
information while retaining complete details of the contextual
data [106], [107].
Design space: Adding annotations to visualizations makes
the detailed information more accessible to users and improves
the memorability of the images [8]. Borkin et al. [8] applied
eye movement studies and cognitive experimental techniques
to verify that adding captions and annotated text to charts
can communicate visual information more effectively. When
annotating charts, both the form of the annotation (text, shapes,
highlights, and pictures) and the kind of desired annotation
(data items, coordinate spaces, structural chart components, and
previous annotations) must be considered [10]. In addition, Kong
et al. [9] defined annotation as a visual cue. They divided the annotations into two categories: internal cues that modify the existing image by highlighting or downplaying the focus area (i.e., the context) and external cues that add supplementary elements (e.g., outlines, annotations, and glyphs) to the existing image to emphasize the focus. They showed that internal cues are often more effective in directing attention than external cues. Internal cues affect the current picture by highlighting the focal region or de-emphasizing the rest of the visualization.
Fig. 2. A summary of interactive tools in each narrative genre, with the supported narrative orderings and interactions of each tool.
Authoring tool: Researchers have developed a range of vi-
sual programming libraries and packages for diagram annota-
tions [108], [109]. These tools require users to have program-
ming skills, while programming tools can only provide asyn-
chronous feedback to designers. To help create chart annotation
more easily, researchers have developed authoring tools that
have appropriate interfaces and can provide feedback to users,
which significantly facilitates the annotating process without
Fig. 3. Selected examples of annotated charts’ design spaces and tools. (1) Design space: an experiment on visualizations’ recognition and recall. The study
shows that adding captions and annotating text improves user attention and recall [8]. (2) Authoring tool: ChartAccent [10], which allows one to manually and
interactively generate data annotations. (3) ML/AI-supported tool: Kori’s [15] Tools viewport. As the user enters text, Kori automatically prompts for potential
references (gray underlining). Simple interactions to manually create links are also supported. (4) ML/AI-generator tool: annotations generated by Contextifier
[20].
requiring specialized programming knowledge. Tableau [110]
provides several basic options for annotating charts. For exam-
ple, the tool allows users to add trend lines to charts. User-created
annotations via text can be data-driven but are limited to some
standard forms of annotation. ChartAccent [10] is an interactive
tool that allows users to generate data annotations manually. It
offers many functions, such as highlighting markers, which are
more straightforward and flexible than Tableau [110]. Selected
markers can be highlighted directly without affecting unselected
markers. Although these tools can easily create annotations,
they still rely largely on the designer’s expertise to create
manually.
ML/AI-supported tool: ML/AI-supported tools of annotated
charts reduce manual operations by automatically providing
annotated suggestions via user interactions. SmartCues [19],
which provides multitouch interaction, is a library that supports
details-on-demand via dynamic computational overlays to assist
users in building queries and generating data-aware annotations.
Touch2Annotate [11] and Click2Annotate [12] are early semi-
automatic annotation generators. Touch2Annotate [11] is a tool
for adding annotations to multidimensional data visualizations
on a multitouch interface. The tool provides annotation tem-
plates and allows users to create high-quality chart annotations
by simply highlighting the data and selecting the appropri-
ate annotation template according to the annotated content.
Click2Annotate [12] allows simple data analysis and generates
easy-to-understand annotations. The semantic information en-
coded in its annotations can be browsed and retrieved. Simi-
larly, Kandogan [13] proposed just-in-time descriptive analysis,
where interacting with a diagram automatically annotates it.
Latif et al. [15] developed Kori based on a design space
analysis of textual and graphical references and added visual-
ization genres, such as line charts, pie charts, and maps. When
users create visualizations with the tool, the system automat-
ically provides annotation suggestions using natural language
and enables combining text and graphs via manual interaction.
Kong et al. [17] proposed an automated system that overlays
user-selected graphics onto existing chart bitmaps and allows
users to customize published visualizations by identifying visual
markers and attributes of axes of encoded data to better assist
users with chart reading tasks. Srinivasan et al. [18] explored
the potential applications of interactive data facts for visual data
exploration and communication. The researchers also developed
the Voder system to demonstrate how users can use interactive
data facts to suggest optional visualizations and modifications,
which helps users interpret the visualizations and convey their
findings. Bryan et al. [14] focused on narrative visualizations
for multivariate, time-varying datasets. They proposed a method
called Temporal Summary Images (TSI) consisting of temporal
layout, data snapshots in the form of comic strips, and textual
annotations. Moreover, researchers have noted that line graphs
are the most common type of visualization in daily life [111].
However, some line charts are deceptive with exaggeration, un-
derstatement, and message reversal. For example, exaggerating
or minimizing the effect size via aspect ratio manipulation in
line charts leads to deceptive representation [16]. To address
this problem, Fan et al. [16] introduced a tool for detecting and
annotating line graphs in the wild that reads line graph images
and outputs text and visual annotations to assess the truthfulness
of line graphs and help readers understand faithful line charts.
Compared with authoring tools, ML/AI-supported tools fur-
ther simplify the difficulty of creating annotated visual dia-
grams and reduce manual operations by automatically providing
annotation suggestions. Furthermore, ML/AI-supported tools
allow users to promptly add annotations to the diagram while
interacting with the visualization based on AI assistance.
ML/AI-generator tool: As annotations are essential in visual-
ization design, researchers have explored annotation approaches
for different visualization genres. The Contextifier [20] provides
an algorithm for selecting annotations that automatically creates
a stock timeline graph and matches the appropriate annotation
to the line graph by referring to the content in the news article.
Liu et al. [21] developed AutoCaption to build a scheme to
accomplish the task of diagram title generation by using deep
neural networks. A one-dimensional residual neural network is
used to analyze the relationships between visualization ele-
ments, identify essential features of the visualization diagram,
and generate a complete description. Both tools create the ap-
propriate information for the diagram without user intervention.
Summary: Annotations are informative additions to visual
diagrams and are an essential part of visual design, helping
audiences quickly understand diagram information and help-
ing analysts revisit and reuse analysis processes conducted in
the past [112]. Researchers have verified the importance of
annotation at the visual memory level [7] and at the cognitive
level [15], which both indicate that annotations are an integral
part of visualization design. Although researchers have studied
the layout problem of annotated charts and the distraction caused
by repeatedly switching views by using interactive highlighting
[113], solutions to occlusion problems, such as annotations
blocking the charts, have not yet been addressed. Therefore,
more advanced techniques and tools are required to improve
the efficiency of the automatic layout. Moreover, for tools to
become more intelligent and accurate, the extraction of the
existing annotated diagram corpuses and the research related to
the identification and correction of incorrect annotations must
both be enhanced. Researchers have also developed various
tools based on annotated design spaces. Just-in-time annotations
and automated annotations provide a new method for users
to promptly update and convey visual information [106]. In
the future, automated annotations can focus more on internal
annotations with the option of rich and aesthetically appealing
visual cues [5].
IV. INFOGRAPHIC
The term infographics, which stands for informational
graphics, refers to a type of visualization that focuses on
the use of graphically designed icons, images, colors, and
other elements to illustrate data and textual information. Otten
et al. [114] defined infographics as “to convey a particular set
of information to a specific audience by transforming complex
and abstract concepts into visual components.”
Design Space: Infographics are frequently utilized in a va-
riety of sectors because they are simple to comprehend and
can improve the viewer’s visual working memory [23], [115].
Different categories of infographics, information units, and pre-
sentation formats have been studied by researchers. Albers et
al. [116] summarized four types of infographics, including
bullet list infographics, snapshot infographics, flat information
infographics, and information flows. Infographics can also be
classified into static, dynamic, and interactive categories based
on their presentation forms.
A good infographic should be attractive, easy to understand,
and easy to remember [22]. Studies have found that audiences
usually form a primary impression of an infographic within the
first 500 milliseconds. This impression depends heavily on the
color and visual complexity of the page. Therefore, to increase
the appeal of infographics, designers should display them by
increasing the contrast between colors or selecting a limited
number of images with text [23]. However, an infographic is not
only a simple combination of graphics and text. Infographics af-
fect how well audiences remember information; when audiences
are pleased by infographics, they are more likely to remember
it over a longer time period [24]. The studies by Lan et al. [25]
showed that adding emotional factors to visual designs can create
better infographics. Other researchers point out that embedding
games into infographics encourages user interaction and im-
proves their exploration experience [26]. In addition, several
specific design guidelines for infographics are proposed. Dunlap
& Lowenthal [27] gave design recommendations on four levels:
overall design, structure, content, and infographics visuals.
Authoring Tool: Infographics have many advantages, but de-
signing infographics can be laborious for amateurs and time-
consuming even for experts. Numerous tools can be used to
create infographics in the design field, including Adobe Il-
lustrator [117], Sketch [118], and other vector drawing tools.
However, these tools do not support associating data with graphics, which makes it complex to match data with graphics when creating data-driven infographics.
Researchers have developed specialized tools to solve this prob-
lem by binding data to vector graphics. For example, designers
can manually draw graphics and associate data with the created
graphics by using Data-Driven Guides (DDG) [28]. This tool
relieves the burdens of designers to manually code data into
custom graphics. Chartreuse [29] and InfoNice [30] help users
create evocative bar graphs with custom markers that convert
new bars into infographics with visual elements. Both tools
are integrated with Microsoft Office as plug-ins, lowering the
barrier to creating infographics. In addition to associating data
with vector graphics, DataQuilt [31] and Infomages [32] are
tools for binding data to bitmap images. In addition, certain
tools are integrated with the sketch functions, allowing users to
create designs more freely [33], [34], [35]. DataInk [34] provides
“pen+touch” interactions enabling designers to express their
creative thinking by drawing on a digital canvas and directly
Fig. 4. Selected examples of infographic design spaces and tools. (1) Design space: different types of infographics have different levels of appeal to users; the scores are the means and standard deviations from user experiments using a 9-point Likert scale [23]. (2) Authoring tool: DDG, a vector drawing tool which can be used to bind vector graphics to data [28]. (3) ML/AI-supported tool: utilizes a deep neural network trained on manually labeled infographics to find visual data items while ignoring creative aspects [36]. (4) ML/AI-generator tool: infographics are automatically generated by simulating online examples in two main steps: retrieval (indexing of online instances based on visual elements) and matching (replacement with personal user data) [40].
matching their drawings to data. SketchStory [35] combines
real-time free-writing with interactive data charts, enabling pre-
senters to move and resize charts by touching the screen. This
feature facilitates the creation of personalized and expressive
data charts. Although all these tools can help create infographics,
most tools can only transform specific data types into specific
forms of visual charts, with line charts and bar charts being the
majority. Designers still need to reintegrate the design elements
and lay them out to form complete infographics.
ML/AI-Supported Tool: Lu et al. [36] built an infographic
visual flow search tool, VIF-Explorer, by analyzing many info-
graphics and extracting the Visual Information Flow (VIF) of
these images. However, this software can only analyze simple
infographics. Complex or nonstandard infographics with cre-
ative elements are challenging to identify and characterize. In-
fographics Wizard [37] can generate infographics with complex
layouts. The tool first recommends VIF layouts based on the
given information, then provides recommendations for visual
group (VG) designs, and finally generates connections between
VGs to complete the infographics. Visme [119], Infogram [120]
and Canva [121] are examples of commercial software. These
web-based tools allow users to drag and drop images and
graphic elements to create polished infographics.
Additionally, an infographic's colors have a
significant impact on the audience’s first impression [23], [115].
InfoColorizer [38] allows users to employ color palettes to create
data-driven infographics.
In short, ML/AI-supported tools for infographics aim to
identify existing infographic layouts and color encodings and
match them to corresponding infographic recommendations.
While they can offer more design options and save effort for
designers, the existing ML/AI-supported tools are not intelligent
enough to make creative and unique infographics similar to those
created by designers who use authoring tools.
ML/AI-Generator Tool: Text-to-Viz [39] generates infograph-
ics by natural language techniques with predefined schemes in
two steps: semantic parsing (identifying how this information
is described by casual users) and visual generation (layout,
descriptions, graphics, and colors). However, the tool is limited
in three aspects: generalizability, as it only supports proportion
facts; expressiveness, as it relies on predesigned styles; and
expression ambiguity, which the current model cannot resolve.
Qian et al. [40] proposed
Retrieve-Then-Adapt to automatically generate infographics by
simulating Internet design works so that it can create richer
designs. Chen et al. [41] proposed a similar solution that
helps users turn existing timeline infographics into re-editable
templates. In the deconstruction phase, a multitask deep neural
network is used to parse the global and local information on
the timeline; in the reconstruction phase, the infographic is
then extended into an editable template by a channel technique.
These approaches identify and visualize accurate information
and ensure that the final generated infographic elements are
organized harmoniously.
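Text-to-Viz's first step, semantic parsing, turns a casual statement into a structured proportion fact. The sketch below is a toy, rule-based stand-in for that step; the regex, the field names, and the example sentence are illustrative assumptions (the actual system uses trained models, and a single regex obviously cannot handle ambiguous expressions):

```python
import re

def parse_proportion_fact(sentence):
    """Extract a structured proportion fact from a casual statement.

    A rule-based stand-in for a semantic-parsing step; real systems
    use trained sequence models rather than one regular expression.
    """
    match = re.search(
        r"(\d+(?:\.\d+)?)\s*(?:%|percent)\s+of\s+(.+?)"
        r"(?:\s+(?:are|have|use)\s+(.+?))?[.!?]?$",
        sentence.strip())
    if not match:
        return None
    value, group, predicate = match.groups()
    return {"fact_type": "proportion",
            "value": float(value) / 100.0,  # normalized share in [0, 1]
            "group": group,
            "predicate": predicate or ""}

fact = parse_proportion_fact("40% of college students are nearsighted.")
```

The second step, visual generation, would then map the normalized value onto a predefined template, for example filling 4 of 10 icons in a pictograph.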
Fig. 5. Selected examples of timeline design spaces and tools. (1) Design space: Brehmer et al. [42] proposed that storytelling with a timeline encompasses three levels of design space: representation, scale, and layout. (2) Authoring tool: Timeline Storyteller's [49] working viewport, where the timeline canvas spans the entire browser window. (3) ML/AI-supported tools. Left: the working window of TimeLineCurator [55], a browser-based authoring tool; the diagram depicts a chronology of Scandinavian pop music, with each hue denoting a different nation. Right: an example of a storyline visualization created using PlotThread [53]; the layouts are developed collaboratively by AI agents and designers, while styles and visual labels are manually modified to enhance the narrative.
Summary: Different types of tools have different focuses
for infographic design. Design spaces of infographics mainly
introduce the key components of a good infographic. For
authoring tools, the focus is on how to bind images with data.
ML/AI-supported tools and ML/AI-generator tools identify the
layout of existing infographics and apply or recommend it to
new infographics. Creating infographics with authoring tools
and ML/AI-supported tools requires users to know what the final
infographics look like, which can be challenging for amateurs.
ML/AI-generator tools are friendlier to amateur users. These
tools help users generate visualizations from data insights and
design aesthetics by using an automated approach that reduces
the complexity of the creative process and effectively increases
productivity.
Although a great deal of research has been conducted, much
work is still required in this category. The first direction is to
adapt current tools to more visualization genres. Existing tools
for converting standard statistical charts into infographics sup-
port only simple chart conversions [29], [30]. A more compre-
hensive visual corpus needs to be built to support a wider variety
of visualization genres in future work. The second direction is to
offer more advanced extraction and editing functions to existing
infographics. Some tools can identify design elements from
existing infographics, but only support simple visual charts [39],
[40], [41]. Meanwhile, the extraction of artistic effects in info-
graphics is still relatively weak and intelligent algorithms can be
applied to tackle this problem. Moreover, editing functions can
be added to infographic identification tools directly to reduce
the effort of switching between software. The third direction is
to enhance research on intelligent algorithms. Many rule-based
algorithms are applied in current tools (e.g., color selection [38]
and icon selection). The quality of infographics generated by
visualization systems can be further improved using more ad-
vanced machine learning or deep learning approaches.
V. TIMELINE & STORYLINE
Timeline and Storyline describe sequences of
events [42]. The most typical timeline has events
arranged horizontally according to their timestamps
and a horizontal axis used to represent time progression from
left to right [50]. In a storyline visualization, the narrative
unfolds from left to right, and each person is represented as a
line. When two people interact at the same moment, their two
lines converge [45], [52]. As their presentations share many
resemblances, timelines and storylines are jointly discussed in
this section.
Design Space: Brehmer et al. [42] proposed that storytelling
with timelines involves three levels of design: representation
(e.g., linear, radial, and grid), scale (e.g., relative and
logarithmic), and layout (e.g., unified and faceted). Moreover, by combining
these three levels, 20 timeline design options were identified to
match the narrative style. Lan et al. [43] identified six narrative
sequencing patterns (chronology, trace-back, trailer, recurrence,
halfway-back and anchor). The study results showed that nonlin-
ear narratives are more likely to increase user engagement and
that nonlinear narratives enable stories to be more expressive
without hindering comprehension. Bach et al. [44] proposed
the concept of time curves for nonlinear narrative visualization.
The aim of their work was to provide a general method for
producing straightforward visual summaries for a variety of
temporal datasets. The researchers describe the visual patterns
that time curves often display (i.e., cluster, transition, cycle,
U-turn, outlier, oscillation, and alternation) and how to interpret
them. Similarly, Kim et al. [48] suggested the use of story curves
to analyze and convey nonlinear narratives in film. Story curves
in this style may be used to establish the general ordering of
events by comparing the order of events in a film to their actual
chronological order.
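Brehmer et al.'s three-level space can be read combinatorially: a candidate timeline design is one choice per dimension, pruned by compatibility constraints. The sketch below enumerates such combinations; the dimension values are only an illustrative subset of the paper's choices, and the viability rule is a made-up placeholder, not the paper's actual constraint set:

```python
from itertools import product

# Illustrative subsets of the three dimensions of the timeline
# design space; the actual space has more choices per level.
representations = ["linear", "radial", "grid", "spiral"]
scales = ["chronological", "relative", "logarithmic"]
layouts = ["unified", "faceted", "segmented"]

def is_viable(rep, scale, layout):
    """Placeholder compatibility rule, standing in for the paper's
    viability constraints (here: assume a grid presupposes segmentation)."""
    if rep == "grid" and layout != "segmented":
        return False
    return True

viable = [combo for combo in product(representations, scales, layouts)
          if is_viable(*combo)]
```

Filtering the full cross product with such constraints is what reduces the combinatorial space to the small set of viable designs the paper reports.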
However, storyline visualization is usually limited in that par-
ticipants cannot belong to two different groups simultaneously.
As a participant is represented as a line, multiple lines bundled
together at a time point usually indicate that they belong to the
same group at that time. However, when a participant belongs
to different groups simultaneously, for example, in co-authorship
relationships, a single line cannot be bundled with all of the
relevant groups at once. To solve this problem, Di Giacomo
et al. [45] proposed a model that aims to present participants
with a tree diagram rather than a line diagram. In addition,
several researchers have proposed a series of design guidelines
regarding the aesthetics and readability of timelines, which can be
roughly divided into three categories: (1) keep lines as straight
as possible to minimize line crossings [46], [47], [52]; (2) lines
in the same set should appear next to each other; and (3) a certain
distance should be kept between lines [52]. These design guidelines
provide a theoretical basis for creating timelines and can guide
users to create better timelines in authoring tools.
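Guideline (1) can be made operational by counting how many pairs of lines swap their vertical order between consecutive time steps; layout engines minimize this count summed over all steps. A minimal sketch of the per-step count (the two-step formulation is my simplification):

```python
def count_crossings(order_t1, order_t2):
    """Count storyline crossings between two consecutive time steps.

    Each argument lists character ids from top to bottom; a pair of
    characters crosses if their relative vertical order flips.
    """
    pos2 = {c: i for i, c in enumerate(order_t2)}
    shared = [c for c in order_t1 if c in pos2]  # characters present in both steps
    crossings = 0
    for i in range(len(shared)):
        for j in range(i + 1, len(shared)):
            if pos2[shared[i]] > pos2[shared[j]]:
                crossings += 1
    return crossings
```

For example, moving C from the bottom to the top of three lines forces it past both other lines, yielding two crossings.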
Authoring Tool: Creating timelines can be a time-consuming
task for novices; consequently, researchers have developed
several authoring tools for creating timelines [122], [123],
[124], [125]. Two of the most commonly used tools are
TimelineJS [124] and TimelineSetter [125]. Both tools can
automatically generate a visual timeline by filling in dates and
titles, describing events in Google spreadsheets, and linking to
corresponding images, videos, and other media. The generated
timeline can also be demonstrated in the form of slides [124],
[125]. Although these tools are popular, they have limitations.
They cannot generate timelines for nonlinear storylines or com-
plex layouts. Kim et al. [48] developed Story Explorer based
on Genette’s [126] research on story events. This tool enables
users to organize the chronology of scenes in a movie script and
explore nonlinear narratives using story curves.
However, several of the abovementioned tools can only create
linear timelines. Before Timeline Storyteller [49] was devel-
oped, designers who wanted to convey expressive stories by
using special timeline layouts (matrices, spirals, etc.) usually ap-
plied time-consuming manual approaches or programming
implementations. However, timelines created using such methods
often lacked guidance in balancing perception and narrative
effects, making them difficult to understand [127]. To
solve this problem, Brehmer et al. proposed a timeline design
space [42] and further developed tools [49] that would easily
allow users to create nonlinear forms of timelines.
Although these authoring tools have lowered the threshold for
users to create timelines, several challenges at the layout and
visual encoding level still need to be addressed. For exam-
ple, when designers need to finish hundreds or thousands of
timelines, it becomes difficult to meet both the aesthetics and
readability principles of the timeline design. It is also time-
consuming and technically difficult for designers to manually
adjust the layout to avoid line crossings and overlaps.
ML/AI-Supported Tool: Some ML/AI-supported tools in the timeline
visualization domain address the abovementioned problems.
TimeSets [50] draws on the Gestalt principles of proximity and
uniform connectedness to group related events and uses background
colors to visually link the events of each set. The tool alleviates
the visual clutter caused by too many lines. StoryFlow [51] uses a new hybrid optimization
strategy that combines discrete (sorting and aligning line entities
to create the initial layout) and continuous (optimizing the
layout based on convex quadratic optimization) optimization
methods to quickly create timelines with aesthetic and readable
properties. However, this approach is insufficient in effectively
supporting advanced design preferences, such as changing the
general trend of lines [52]. Tang et al. [52] created iStoryline to
create more meaningful storyline visualizations that satisfy the
needs of designers. This tool integrates user interactions into an
optimization algorithm that allows users to easily create story
visualizations by modifying the automatically generated layouts
according to their preferences.
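The discrete "sorting" stage that tools such as StoryFlow automate can be approximated with a barycenter heuristic: order each time step's interaction sessions by the mean position their members held at the previous step. This sketch is a deliberate simplification of that stage (the published systems add alignment, compaction, and convex quadratic optimization on top):

```python
def barycenter_pass(prev_order, sessions):
    """Order one time step's interaction sessions by the mean (barycenter)
    position their members held at the previous step, which tends to keep
    lines straight and reduce crossings.

    prev_order: character ids top-to-bottom at the previous step.
    sessions: groups of characters bundled together at the current step.
    """
    prev_pos = {c: i for i, c in enumerate(prev_order)}

    def barycenter(group):
        known = [prev_pos[c] for c in group if c in prev_pos]
        # Characters with no previous position sink to the bottom.
        return sum(known) / len(known) if known else float("inf")

    ordered = sorted(sessions, key=barycenter)
    return [c for group in ordered for c in group]
```

Running such a pass over every time step, forward and backward, is the classic way to obtain an initial layout before continuous optimization refines the line positions.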
While iStoryline’s [52] interactions focus on modifying local
areas, customizing the overall layout is time-consuming and the
optimization process is unpredictable, which requires repeated
trials to optimize the results. To improve the user experience,
PlotThread [53] integrates AI agents into the authoring process.
The AI agent can decompose a given storyline into a series
of segments, allowing the user to understand the state of the
intermediate layout and predict the following action. In addi-
tion, Ellipsis [54] and TimelineCurator [55] are both timeline
authoring tools focused on the field of journalism. Ellipsis [54]
blends a domain-specific language for narrative development
with a graphical user interface framework. TimelineCurator [55]
uses natural language processing to extract temporal references
from unstructured documents and build a timeline from them.
These tools significantly facilitate the
management and processing of documents containing timelines.
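The core step of extracting temporal references from free text and ordering them on a time axis can be caricatured in a few lines. The sketch below only recognizes bare four-digit years via a regex, whereas a tool like TimelineCurator relies on a full NLP pipeline and handles far richer date expressions; the pop-music sentences are invented examples:

```python
import re

def extract_events(text):
    """Pull (year, sentence) pairs from free-form text and sort them
    chronologically. Only bare four-digit years (1500-2099) are
    recognized here; real tools handle many more date formats."""
    events = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        match = re.search(r"\b(1[5-9]\d{2}|20\d{2})\b", sentence)
        if match:
            events.append((int(match.group(1)), sentence.strip()))
    return sorted(events, key=lambda event: event[0])

timeline = extract_events(
    "ABBA won Eurovision in 1974. "
    "A-ha released Take On Me in 1985. "
    "Roxette formed in 1986."
)
```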
Summary: Timelines and storylines are used to depict event
progressions. Researchers focus on timeline aesthetics and nar-
rative impact in timeline & storyline design. Users can manually
design timelines for particular scenarios (e.g., movie narration) or
use authoring tools to create special layouts (e.g., matrices and
spirals). ML/AI-supported tools leverage intelligent algorithms to
assist users in creating narratives by sorting temporal sequences
and text information from unstructured raw data. These tools also
enhance the aesthetics and usability of timelines and make authoring more efficient.
We observed that a significant amount of text information can
be easily processed using ML/AI-supported tools, whereas a
Fig. 6. Selected examples of data comics' design spaces and tools. (1) Design space: design patterns for data comics and illustrations of some examples [60]. (2) Authoring tool: DataToon's working viewport, which supports creating dynamic web data comics through pen + touch interaction [64]. (3) ML/AI-supported tool: ChartStory's working viewport, which automates the analysis, layout, and captioning of data comics that tell stories with data [68]. (4) ML/AI-generator tool: Calliope [70] automatically generates visual data stories from spreadsheets and includes a story generator and editor.
limited amount of text and a particular type of timeline can be
created using authoring tools.
Following the research directions indicated by the existing
studies, we believe the following directions may be studied in
the future. The first direction is to explore the need for special
forms of timelines. Although Brehmer et al. [42], [49] proposed
six forms (i.e., linear, radial, spiral, curved, calendar, and grid)
of timeline representation, their study mainly focused on two
forms, linear and radial. Moreover, the representations of these
particular timelines determined by the researchers have not been
verified in terms of user acceptance and communication effec-
tiveness. Future work needs to validate these representations via
formal experiments and implement more real-world applications
of such new forms of timelines. The second direction concerns
information integrity: existing authoring tools often overlap
timelines when creating content with multiple temporal texts, and
subjectively merging timelines for aesthetic reasons results in the
loss of information. In the future, we also need to strengthen the research in
this area, ensuring the integrity of information while achieving
the aesthetic goal. In the realm of timelines and storylines,
ML/AI generator tools are still in their developmental stages.
While current ML/AI-supported tools can assist users in creating
timelines, they are primarily utilized for localized adjustments
and fall short in terms of fulfilling the demands of the complete
content creation process. The future holds immense potential
for the research and development of advanced ML/AI generator
tools for timelines.
VI. DATA COMICS
Data comics are an emerging form of narrative visual-
ization [98] that focuses on the variation of data informa-
tion and the visual presentation of data sequences [58].
Different from traditional comics, data comics must contain
data-driven content, allowing multiple visualizations to be jux-
taposed in a single panel in a comic strip layout, with annotations
and visual decorations [56]. Data comics complement the linear-
ity of narratives that are inherently imposed by movies and live
presentations while offering the flexibility of two-dimensional
spatial arrangements in infographics and annotated charts [60].
Design Space: Comics are a static format that is great for
ideation and storyboards [57]. Given that the technical barri-
ers are low, comic creation can be shared and distributed in
various formats, such as scientific papers, conference posters,
slideshows, blogs, etc. The sequential nature of data comics
and the tight integration of text and graphical information have
great potential to explain complex data and to promote visual-
ization and data literacy [59]. Data comics have the potential to
transform the way we envision and produce infographics and
presentations because they can carry storytelling approaches
from one medium to another [60]. Furthermore, data comics
are incredibly flexible and communicative. They can integrate
comic-style graphic elements with textual explanations and
deliver visual content that supports memorization
and quick navigation [61], [62].
Although data comics have many advantages, creating good
data comics is a complex task. Designers must consider many
tradeoffs, such as balancing repetition and highlighting, and the
results rely significantly on the expertise of designers. Zhao
et al. [62] addressed the issue of panel ordering in data comics by
examining the narrative mechanisms of comic strips; their study
indicates that presenting a clear panel order helps readers recall
details in data comics. Wang et al. [61] conducted a user study to compare
data comics and infographics in terms of the degree of clarity of
reading order and the degree of integration of text and images.
The findings demonstrated that complicated spatiotemporal data
are difficult to depict with infographics but can be presented
with data comics. The participants enjoyed reading data
comics in the experiment and regarded them as more entertaining
and more effective at retaining their attention.
In another study, Zhao et al. [56] compared data comics with
PowerPoint [128]. The results also showed that data comics
are more attractive, more space-efficient, and more enjoyable
to use than PowerPoint [128]. Moreover, as the narrative style
of comics is usually linear, a possible approach is to transform
data comics into data videos with appropriate tools. Mean-
while, comics can present specific moments in separate frames,
allowing for a more focused presentation of individual data
information [62].
To help people comprehend the art of data, visualization,
narrative, and the necessity for efficient data-based communica-
tion, Bach et al. [60] offered a collection of data comic design
patterns. They also constructed six design patterns for data
comics according to different associations and layout methods.
Some researchers further validated the usefulness of this design
space in practical cases. For example, Hasan et al. [63] created
an interactive data comic in the form of a card game. Each comic
panel becomes an individual card instead of being arranged
in a fixed sequence; learners can form different storylines by
combining them in different ways. Their research showed that
transforming data comics into card games allows learners to
grasp information quickly via interaction and encourages col-
laborative thinking among participants.
Authoring Tool: Researchers have developed various tools
to create data comics to enhance the potential user experience.
DataToon [64] is a tool for creating dynamic web data comics
that support “pen+touch” interactions. The tool allows quick ex-
ploration of data, rapid generation of visual stories with custom
annotations, and interactive filtering of layout templates. How-
ever, displaying exploration data and presentation information
on the same page can cause visual distractions. Kang et al. [65]
solved this problem by proposing ToonNote. ToonNote provides
two view modes: notebook view, which adopts the format of a
traditional computing notebook to conduct data analysis, and
comic layout, which focuses on visual storytelling.
Suh et al. [67] developed CodeToon, a tool that supports the
comic creation process by adopting two mechanisms. One is to
facilitate the conception of code-related stories via metaphorical
recommendations; the other is to generate comics from stories
automatically. Both mechanisms allow users to add their own code or
select code examples provided by the tool, generate a story, and
automatically produce comics. The tool allows users to quickly
and easily create high-quality coding strips. To enhance the
user experience of data comics, Wang et al. [66] proposed a
lightweight declarative scripting language, Comic Script, which
supports adding interactivity to static comics. Their work
overcomes a limitation of the original narrative mode, which can only
produce linear, unchangeable stories: it enables nonlinear
narratives and personalized layouts and explores potential user
experiences and levels of detail.
ML/AI-Supported Tool: ChartStory [68] is a tool that automat-
ically converts a collection of charts into a data comic format.
It divides charts into clusters of story segments by identifying
narrative segments and then reorganizing the segments to gen-
erate a story. Users can further refine the generated data comics
via interaction.
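ChartStory's first stage, grouping a chart collection into candidate story segments, can be illustrated with a naive field-overlap clustering. The real system also analyzes chart types and derived data relationships; the `fields` representation and the seed-chart comparison below are simplifications of my own:

```python
def cluster_charts(charts):
    """Group charts into candidate story segments by shared data fields,
    a much-simplified stand-in for a chart-analysis stage.
    Each chart is a dict with an 'id' and a 'fields' set."""
    clusters = []
    for chart in charts:
        for cluster in clusters:
            # Join a cluster if the chart shares a field with its seed chart.
            if chart["fields"] & cluster[0]["fields"]:
                cluster.append(chart)
                break
        else:
            clusters.append([chart])  # no overlap: start a new segment
    return clusters

charts = [
    {"id": "c1", "fields": {"year", "sales"}},
    {"id": "c2", "fields": {"year", "profit"}},
    {"id": "c3", "fields": {"region", "population"}},
]
segments = cluster_charts(charts)
```

A later stage would order the segments into a narrative and caption each panel.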
ML/AI-Generator Tool: Fact sheets present multiple data facts
via visualization in a juxtaposed format that is highly similar
to data comics. In a fact sheet, a data story is constructed
from several facts and numerical or statistical findings produced
from data [69]. Although some comic elements are missing in
fact sheets, we still categorize them in this category because
they can be easily extended to data comics by adding some
comic-style decorations. Both DataShot [69] and Calliope [70]
can automatically generate fact sheets. DataShot [69] transforms
tabular data into fact sheets by adopting a three-step process
of fact extraction, fact combination, and visual synthesis. This
tool can effectively reduce the difficulty of data exploration,
create information presentations and enhance the readability of
data by means of expressive visual design. Calliope [70] extends
this method by automatically creating visual data stories from
spreadsheets using the Monte-Carlo tree search technique to
explore story fragments and arrange them logically. Calliope
generates coherent stories with consistent logical connections
between segments, thereby lowering the threshold for creating
data stories.
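The fact-extraction step that opens such pipelines can be sketched minimally: scan tabular data for candidate facts that later combination and synthesis steps would select and visualize. The fact types, field names, and rainfall numbers below are invented for illustration; DataShot supports a much richer fact taxonomy, and Calliope replaces plain enumeration with Monte-Carlo tree search:

```python
from statistics import mean

def extract_facts(rows, category_field, value_field):
    """Derive a few candidate data facts from tabular rows, mimicking a
    fact-extraction stage; downstream steps would combine the facts
    into a sheet and render them as visualizations."""
    top = max(rows, key=lambda r: r[value_field])
    avg = mean(r[value_field] for r in rows)
    return [
        {"type": "extreme",
         "text": f"{top[category_field]} has the highest {value_field} "
                 f"({top[value_field]})"},
        {"type": "average",
         "text": f"The average {value_field} is {avg:.1f}"},
    ]

rows = [{"city": "Oslo", "rainfall": 76},
        {"city": "Bergen", "rainfall": 190},
        {"city": "Stockholm", "rainfall": 54}]
facts = extract_facts(rows, "city", "rainfall")
```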
Summary: Although still in their infancy, data comics have
gained much attention in recent years. According to some
preliminary studies [56], data comics perform better than
slideshows and infographics in terms of spatial efficiency and
reader enjoyment. However, a more detailed evaluation with
a larger number of participants needs to be conducted to
validate their usage and effectiveness in practice. Moreover, while
data comics possess a leisurely and entertaining nature, they
are occasionally applied in serious and sensitive contexts. For
example, Charité in Berlin regularly uses comics to educate heart
surgery patients, demonstrating the practical applicability of this
medium outside of research settings [129].
Almost all the existing tools for creating data comics support
basic data exploration and analysis. While authoring tools can
reduce the difficulty of creating data comics, they target users
with a certain level of visualization skills and are therefore not
friendly to amateurs who want to create
data comics from scratch. ML/AI-supported tools and ML/AI-
generator tools for creating data comics integrate the ability to
analyze data, visualize the analyzed content, and present the
information in a narrative format. The difference between the
two types of tools is that ML/AI-generator tools can automati-
cally analyze data and arrange the data insights into comic-style
narratives directly. By contrast, ML/AI-supported tools require
users to select valuable insights or manually lay out the panels of
data comics. Reflecting on the collected work presented above,
we think that the data comics can be studied in the following
aspects in the future. First, the forms of comics vary to a large
degree, and current research has ignored which design styles and
narrative strategies suit different data types [59] and which
styles of data comics users prefer under what circumstances.
Second, redundant and non-data-related visual elements in data
comics can sometimes confuse and distract viewers, underscoring
the need to explore how the number of panels, the amount of text,
the layout, and the color scheme can be designed to be more
acceptable to users.
Third, incorporating interactive features into data comics could
enhance user understanding and engagement [62], despite their
being a static medium.
VII. SCROLLYTELLING & SLIDESHOW
The term “scrollytelling” is a combination of “story-
telling” and “scrolling.” It is a scrolling-based visual
narrative form that is widely used in data-driven arti-
cles [71]. Scrollytelling articles usually start with a full-screen
photo or video and reveal the next part of the content as the
reader scrolls [71]. A similar form of visual narrative to scrollytelling is
the slideshow [75], [130]. McKenna et al. [131] noted that many
recent websites integrate buttons and sliders; the distinction
between a stepper and a scroller depends on whether the user
advances by clicking a stepper button or by scrolling. In addition,
in terms of story layout, pages often appear as slideshows or as
hybrids that combine features of slideshows and steppers with
different animations and scrolling behaviors. Such hybrids resemble
both steppers and scrollers, the latter of which supports
scrollytelling. As the slideshow form and the hybrid form can be
interconverted, we jointly studied scrollytelling and slideshows.
Design Space: Scrollytelling articles are usually text-centered
and use multimedia elements such as images and videos to assist
narrative storytelling [72]. Various transition styles between
pages can be triggered by scrolling. The choice of transition
style is usually determined by the relationship between facts
(e.g., comparative, similar, and sequential). Scrolling can also
trigger visual cues, such as highlighting facts in a visualization,
to direct attention or to indicate stages that assist browsing [79].
A slideshow is composed of a collection of slides instead of
continuous content as in scrollytelling. Elias et al. [73] reviewed the
elements that comprise a slideshow presentation, identifying six
typical elements that ensure accessibility: slide title, text box,
image, embedded content, equations, and tables. Hullman et al. [74]
analyzed 42 narrative visualizations in the form of slides and in-
vestigated how the choice of order affects narrative visualization.
For slideshows, the narrative advances through discrete clicks,
taps, key presses, or swipes, allowing the designer to control the
pace of storytelling. In addition, users can add pages to or remove
pages from the slideshows according to their needs and can
exit the presentation page at any time. Slide layouts can show
continuous progress between slides or support nonlinear breaks
in the narrative [75]. However, when readers have to navigate
too many pages, they may suffer from boredom, while
too few pages hinder them from remembering the story.
Therefore, the length of the story told in slides must be
carefully established [79].
Authoring Tool: Creating scrollytelling articles is a challenging
task. Idyll [76] provides a "scroller" component for building
scrolling narratives, allowing users to control document style
and layout and to advance pages by clicking or scrolling.
Sultanum et al. [77]
explored a data-driven approach to article story creation that
separates semantic, textual, and graphical links and story layout
forms. On this basis, the researchers developed VizFlow [77], a
tool for creating dynamic data-driven articles. With a text-chart
linking strategy, VizFlow allows users to create dynamic layouts
for static data-driven articles.
Users have more options or tools to create slideshows com-
pared with scrollytelling. The most popular ones are Power-
Point [128], Keynote [132], and Google Slides [133]. This type
of software aims to help users manually create slideshows that
contain text, images, and other multimedia content. Abundant
design templates allow users to focus on the information they
want to present rather than spending time on the visual layout [134].
ML/AI-Supported Tool:
Users often employ slideshows for presentations or speeches.
However, it usually takes considerable time and effort to create
slideshows before the presentation, and for an impromptu speech,
users cannot create slideshows in such a short time. Tedric [78]
is a tool that constructs a coherent slideshow from a single
subject idea by blending a semantic word web with text and
image data sources to produce a slideshow that matches the subject.
The user studies conducted by the authors demonstrated that the
use of the tool significantly reduces the barriers to impromptu
speech and saves users much time.
ML/AI-Generator Tool: Leake et al. [135] developed a system
that transforms text into audiovisual slideshows by recognizing
key words in each sentence, converting the text to speech, and
automatically selecting relevant images. Lu et al. [79]
proposed a method for automatically generating scrollytelling
visualizations. The method begins by listing the data facts for a
given dataset, scores the facts and arranges them into stories, and
then produces visualizations, transitions, and text descriptions
for the scrolling display. However, since existing work in this
category is mostly prototypes, practical use of ML/AI-generator
tools for scrollytelling remains unproven.
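The list-score-arrange pipeline described above can be sketched as follows. The scores, subject tags, and transition rules are invented placeholders for the paper's measures; the real method also derives transition styles from relationships between facts (comparative, similar, sequential):

```python
def arrange_story(facts):
    """Score candidate facts, keep the best, and assign a transition
    style between adjacent scroll steps based on how the facts relate.
    A toy stand-in for a scrollytelling-generation pipeline."""
    ranked = sorted(facts, key=lambda f: f["score"], reverse=True)[:3]
    story = []
    for prev, cur in zip([None] + ranked, ranked):
        if prev is None:
            transition = "fade-in"
        elif prev["subject"] == cur["subject"]:
            transition = "morph"   # same subject: animate the change
        else:
            transition = "slide"   # new subject: replace the view
        story.append({"fact": cur["text"], "transition": transition})
    return story

facts = [
    {"text": "Sales doubled in 2021", "subject": "sales", "score": 0.9},
    {"text": "Sales dipped in 2022", "subject": "sales", "score": 0.8},
    {"text": "Profit stayed flat", "subject": "profit", "score": 0.7},
    {"text": "HQ moved to Oslo", "subject": "ops", "score": 0.2},
]
story = arrange_story(facts)
```

Each story entry would then drive one scroll step: render the fact's visualization and play the chosen transition when the reader reaches it.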
Summary: Scrollytelling is a scrolling view of content, an
interaction that is consistent with our everyday behavior of
browsing web pages and articles on mobile devices. A slideshow
is another common, step-based form of presentation. Although
we often encounter the two forms of narratives in daily practice,
academic research on slideshow and scrollytelling is generally
lacking. First, as mentioned in the timeline chapter, nonlinear
narratives are more likely to engage users, and scrollytelling and
slideshows can use both linear and nonlinear ways of presenting
information. Scrollytelling and slideshows allow audiences to explore different paths through the content based on their own interests and needs. Instead of following a predetermined linear sequence, the audience can select their own journey
Fig. 7. Selected examples of scrollytelling design spaces and tools. (1) Design space: Outlines how to use automatic sequencing in design systems to guide non-designers in making structured decisions when creating narrative visualizations [74]. (2) Authoring tool: Uses text-chart links to transform static data-driven articles containing text and charts into dynamic content [77]. (3) ML/AI-supported tool: Tedric system workflow, which can be used to train presentation skills, reduce barriers to impromptu speaking, and generate slideshows based on audience suggestions [78]. (4) ML/AI-generator tool: A method for automatically generating scrollytelling visualizations [79].
by clicking on links, making selections, or following different branches of the narrative. This
approach gives the audience more control over the pace and order
of information, allowing them to focus on the aspects that are
most relevant or meaningful to them. Future work can investigate
whether other nonlinear narrative structures are also suitable
for scrollytelling or slideshow. Second, existing research has
focused on different media combinations, such as images, text,
and video, with minimal research on data visualization and
intelligent tools. In particular, slideshow creation tools are in-
explicitly designed to create narrative visualizations. Therefore,
future research can investigate needs and design requirements
for narrative visualization, thus providing more support to create
data-driven scrollytelling and slideshows.
VIII. DATA VIDEO
Data video is a narrative visualization type [98] that
combines data visualization with motion graphics and
tells data-driven stories. Data videos can present viewers
with diverse visual information in a short period, and therefore,
they are widely used in disseminating data information [80],
[90]. Design space: Researchers have primarily focused on
understanding, creating, and disseminating data videos. Amini
et al. [80] first proposed a visual narrative structure theory,
in which the narrative structure of data video can be divided
into four roles: establisher (E), initial (I), peak (P), and release
(R). On this basis, Cao et al. [81] presented a more extensive
taxonomy of data video, including four narrative structures, five
main genres, and six narrative qualities. Users can quickly find
specific types of data videos with the help of this classification.
These studies provide a solid foundation for designers to create
data videos. Xu et al. [82] considered data videos’ opening
narrative and visual presentation design. They proposed six
cinematic opening styles (symbolism, metaphor, camera eye, big bang, old footage, and ending-first styles) and 28 design guidelines for the six styles.
Visual narratives in data videos are usually performed using
animation because animation can represent temporal changes
and enhance the comprehension and user engagement of data
stories [83]. Shi et al. [88] analyzed 43 animation techniques for
narrative visualizations and categorized eight narrative strategies
(e.g., emphasis, suspense, and comparison) to construct a design
space. This design space describes data video production and its integration with visual narrative strategies, providing useful design suggestions and lowering the barriers to expressive data video creation. By examining animated data charts, Tang et al. [89]
created a design space for data videos with five dimensions: data,
motion, layout, duration, and narrative. Moreover, they proposed
20 design guidelines based on these dimensions. In addition,
other researchers have conducted studies on how to increase the
effectiveness of data video communication. Sallam et al. [84] found that a problem with no clear solution may be better presented in a data video because the audience may feel high levels of negative emotions. To improve the quality and
reduce the complexity of data video, Wang et al. [85] proposed
Fig. 8. Selected examples of data video design spaces and tools. (1) Design space: Amini et al. [80] state that E+I+PR+ is the most balanced narrative structure for data videos. (2) Authoring tool: Example of a data-driven video generated using DataClips for financial analysis [90]. (3) ML/AI-supported tool: Data Animator's storyboard editing window, which can segment complicated animations by stacking keyframes and use data parameters to stagger the start time and modify the pace of animated objects in the timeline view [93]. (4) ML/AI-generator tool: AutoClips automatically generates keyframes for data videos based on a series of data facts [97].
nonlinear time mapping and foreshadowing. The “foreshadow-
ing” technique, as described by researchers, is only applicable
to animated stacked images. Li et al. [86] expanded on this by formally defining visual foreshadowing, a technique that addresses the problem of lengthy videos losing the viewer's attention. Shu et al. [87] examined the Data-GIFs
design space and offered recommendations.
Authoring Tool: Producing data videos is time-consuming
because it requires collaboration between people from different
backgrounds (e.g., data analysts to generate data and insights,
scripters to write narratives, and motion designers and graphics
experts to produce video assets). Each element may depend on
one or more particular software tools [90]. DataClips [90] pro-
vides a set of data clip libraries that allow amateurs to combine
data-driven clips to form longer sequences of data videos. Lan
et al. [91] developed Kineticharts, a chart animation scheme
for conveying emotions, based on the animations provided in
DataClips [90]. Compared with DataClips, Kineticharts [91]
can enhance the emotional engagement of users by improving
the presentation of the story without hindering users’ under-
standing of the data. In addition, Chen et al. [92] developed
VisCommentator, a tool for analyzing ball sports videos in sports
programs, facilitating the creation of enhanced sports analysis
videos through data insights and visualization suggestions.
ML/AI-Supported Tool: Researchers have designed and de-
veloped tools to transform static visualizations into dynamic
versions. Data Animator [93] utilizes the Data Illustrator [136]
framework to match two static visual objects and generate
automatic transitions by default. The tool also supports divid-
ing complex animations into segments by layering keyframes,
utilizing data characteristics to stagger the start time, and ad-
justing the pace of animated objects through a timeline view.
Similarly, InfoMotion [94] can build data videos by extracting the graphical attributes of an infographic, understanding its underlying information structure, and adding animation effects to
the visual pieces of the infographic in chronological sequence.
As InfoMotion [94] is built into PowerPoint [128] as a plug-in,
it can automatically link a variety of built-in animation effects
to the visual parts of slides, which is excellent for speeding up
data video production. InfoMotion [94] also makes creating data videos easier than Data Animator [93], which can only use data in the Data Illustrator [136] format. In addition, while Gemini2 [95]
and Cast [96] are not dedicated tools for creating data videos,
both tools can build keyframes for charts. Gemini2 [95] focuses
on helping users create animations by referring to keyframe sug-
gestions. Similarly, Cast [96] provides a GUI that allows users to directly manipulate the parameters of animation effects (e.g., animation type and easing function) and refine animation specifications (e.g., adjusting keyframes to play across tracks and adjusting delays). ML/AI-supported tools
for data video creation identify existing infographic elements
and convert them into dynamic video clips, while authoring tools
provide a library of data clips for direct use.
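The staggered-start-time idea mentioned above can be illustrated with a minimal sketch: each mark receives an animation delay proportional to the rank of its data value, so marks animate in data order rather than all at once. The function name and millisecond budget are hypothetical, not Data Animator's actual API [93]:

```python
def staggered_delays(rows, key, total_stagger_ms=500):
    """Assign each mark a start delay proportional to the rank of its
    data value (illustrative sketch; names are hypothetical)."""
    # Indices of rows, ordered by the chosen data attribute.
    order = sorted(range(len(rows)), key=lambda i: rows[i][key])
    # Spread the delays evenly over the total stagger budget.
    step = total_stagger_ms / max(len(rows) - 1, 1)
    delays = [0.0] * len(rows)
    for rank, i in enumerate(order):
        delays[i] = rank * step
    return delays

rows = [{"sales": 30}, {"sales": 10}, {"sales": 20}]
# Smallest value starts first; largest value starts last.
print(staggered_delays(rows, "sales"))
```

A tool can then feed these per-mark delays into its timeline view, where the user refines the pacing interactively.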
ML/AI-Generator Tool: While the abovementioned technolo-
gies ease the design process, data videos are still difficult
to create because users must select which visualizations and
animations to utilize and how to assemble a cohesive video.
This problem was addressed by the emergence of AutoClips [97], a tool that automatically creates a data video
from a series of facts, saving users time and reducing complexity
by eliminating the need for data analysis and animation creation
using video motion software. However, it has limitations, as it
only supports tabular data and tends to favor datasets with diverse
column types.
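In the spirit of the fact-to-clip assembly described above, a minimal sketch might map each data fact to an animated chart template and concatenate the chosen clips on a single timeline. The template library and durations here are hypothetical, not AutoClips' actual design [97]:

```python
# Illustrative clip library: each fact type maps to an animated chart
# template and a default clip duration in seconds (hypothetical values).
CLIP_LIBRARY = {
    "trend": ("animated line chart", 4.0),
    "proportion": ("growing pie chart", 3.0),
    "rank": ("racing bar chart", 5.0),
}

def assemble_video(facts):
    """Concatenate one clip per fact into a video timeline."""
    timeline, t = [], 0.0
    for fact in facts:
        template, duration = CLIP_LIBRARY.get(fact, ("annotated chart", 3.0))
        timeline.append({"start": t, "clip": template, "fact": fact})
        t += duration  # clips play back to back
    return timeline, t

timeline, total = assemble_video(["trend", "rank", "proportion"])
print(total)  # total video length in seconds
```

A real generator would additionally pick transitions between adjacent clips and render the animations, but the fact-to-template mapping is the core of the automation.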
Summary: Data video has become popular owing to the
growth of social media platforms. Research on data video has
also received much attention in recent years. At the design space
level, researchers have explored the understanding, creation,
and dissemination of data videos to help better understand the
components of data videos. These design spaces and guidelines
provide the theoretical basis for developing authoring tools,
ML/AI-supported tools, and ML/AI-generator tools. Authoring
tools simplify the creation of data videos by offering a library
of existing data clips that can be replicated. ML/AI-supported
tools focus on how to identify existing static visualization ele-
ments and convert them into dynamic videos. ML/AI-generator
tools can automatically generate data videos directly from
input data.
However, certain issues still need further investigation. First,
existing automatic tools for creating data videos are still limited
to a few visualization genres and input formats. For example,
AutoClips [97] only supports tabular data, limiting the visual
display possibilities of data video. More tools are needed to
handle various data types, such as spatial-temporal data and
textual data, which are essential for constructing diverse data
narratives. Second, researchers also need to study how the
speed, continuity, and smoothness of animations in data videos,
the transitions between charts and graphs, and the embellishment
effects added to the videos would affect the understanding and
overall experience of readers [91].
IX. DISCUSSIONS AND FUTURE WORK
In this section, we outline the current limitations and future
research opportunities of design spaces and tools at different
automation levels for narrative visualization.
Design space aims to describe all the possible design
aspects for various narrative genres. The summary of
the design space allows us to capture some implicit
knowledge of visual designers and practitioners [104]. Most ex-
isting studies propose clear design guidelines in specific design
scenarios [60], [89], [137]. However, the design space articles
on the different narrative genres vary in focus. For annotated
charts and infographic genres, the focus is on how to effectively
create a correct and aesthetically appealing visualization. For
timeline, data comics, scrollytelling, and data video genres, the
focus is more on exploring the narrative structure. In particular,
data video pays special attention to creating animations, while
other genres focus more on static presentations. Below are the
major future research directions.
Simplify and Validate the Design Space. Creators can gen-
erally access many existing visualization design guidelines,
but choosing the right guidelines is difficult for them. Moreover, design guidelines often fall short in explaining when they are most appropriate to use, and they lack proper validation [89].
For example, researchers have proposed visualization design
process frameworks [138], [139], but have not explained in what scenarios or how these frameworks should be used for visualization
design. Therefore, a potential research direction is to validate
the usage of various design spaces and classify them according
to application domains. Amateurs may also be provided with an
overview of design spaces to tackle specific design problems.
For more experienced designers, we could pay more attention to
the subtle design guidelines that can improve the user experience
and user perception in the visualization.
Explore New Narrative Structures. The existing narrative
structures are primarily derived from movies or other audio-
visual content [80]. Information on the application of narrative
structures in novels and plays in narrative visualizations is
generally lacking in the extant literature. Due to the different
characteristics of various narrative genres, the choice of narrative
structure can also be different. For example, a timeline mainly
presents content in a linear narrative sequence [50], whereas in
data videos, using a nonlinear narrative approach is more likely
to engage the audience [43]. Therefore, a potential research direction is to explore which narrative structures are more suitable for different data types and narrative genres.
Explore other Narrative Visualization Genres. Some tradi-
tional visualization genres that focus more on visual analytics
are embracing narrative and storytelling concepts. Suprata [137]
noted that adding narrative attributes to dashboards allows users
to become more aware of their goals and how to take action
next. Fernandez Nieto et al. [140] enhanced teachers’ guidance
of the content by including narrative attributes in designing
learning analytics dashboards. With more attention and practical
applications of narrative attributes to traditional dashboards, nar-
rative dashboards can be another future narrative genre. Mean-
while, some new genres of visualizations have emerged, such as
immersive visualization [141] and data physicalization [142],
which can also be developed with narrative characteristics. The
potential research direction of narrative immersive visualization
is a more in-depth exploration of data types, spatial layouts,
and user interactions for narrative communication in the virtual
environment. The physicalization of data encodes information
in a perceptible form, allowing users to explore using all their
senses and motor skills [143]. More research on narrative data
physicalization is needed to understand the design space, data
production process, and benefits compared to flat visualization
or virtual presentation [144].
Authoring tools aim to facilitate the visualization cre-
ation process with controllable interactions. These tools
include stand-alone applications [10], [64], web-based
tools [28], [124], and authoring tools that combine with office
software [29], [30]. These tools offer users enough control
to create customized visualizations, even complex ones that
cannot be supported by automated tools. Although these au-
thoring tools significantly improve the efficiency of creating
narrative visualizations, most tools are aimed at users with
a certain level of expertise. For example, authoring tools for
infographics and timelines require users to have visual design
skills, while data videos require users to have video editing
skills. Future research may invest more efforts in the following
directions.
Develop Flexible Interfaces for Authoring Tools. Among
narrative authoring tools, a few tools can be used to freely
draw creative patterns on a screen, including DataSelfies [33],
DataInk [34] and SketchStory [35] for infographics and Data-
Toon [64] for data comics. However, other narrative visualiza-
tion tools are relatively lacking. Providing more flexible inter-
face methods can help designers achieve more creative ideas and
more artistic effects in creating various narrative visualizations.
Develop More Interactive Visualizations. Among the existing
narrative visualization genres, only scrollytelling and slideshow
have strong interactive properties, while annotated charts,
infographics, data comics, and data videos are mostly static vi-
sualizations that lack interactive functionality. However, studies
have proven that by providing interactivity [66] and adding inter-
esting [26], [27] and emotional factors [25] to the visualization,
users are more likely to memorize the information. Therefore, in
future research, a possible approach is to explore more narrative
genres of interactive visualizations and to add interesting and
emotional elements.
ML/AI-supported tools are designed to assist users in
visualization creation by applying intelligent algorithms
and techniques. Such tools can provide recommenda-
tions or guide the user via the creation process. ML/AI-supported
tools for narrative visualizations can serve a wider range of
users than authoring tools. For example, designers who lack
data analytic skills can easily create data comics with the data
analysis capabilities of ML/AI-supported tools; data analysts
who lack design skills can use ML/AI-supported tools to create
more aesthetically pleasing timelines or data videos.
However, the automation goals and functions of current ML/AI-supported tools differ across narrative visualization types. For example, tools for annotated charts,
infographics, and data comics have the auxiliary function of
identifying and parsing visualizations. Among them, the purpose
of annotated chart recognition is to add annotations to facilitate
comprehension of the visual story; the purpose of infographic
recognition is to create new visualizations based on the orig-
inal visual styles; and the purpose of data comics recognition
is to transform visual content into the comic layout. ML/AI-
supported tools for timelines focus on placing timeline text and
optimizing visual aesthetic effects, while data videos pay more
attention to the creation of animation. By summarizing existing
research in ML/AI-supported tools for narrative visualization,
the following directions can be studied.
Additionally, most existing tools’ annotations only explain
statistical information on a single chart [32], and there is a lack
of studies that apply intelligent techniques to extract contextual
information for building visualizations with narrative structures.
Improve the Reusability of Existing Visualizations. In practice,
the majority of charts are saved as bitmap pictures. Although
they are simple to spread and use, they are difficult to modify.
VisCode [145] and Chartem [146] can store and hide the original
data information inside the picture of a chart. However, only
rudimentary visual charts are supported by these tools. There-
fore, tools to support the recognition and reprocessing of more
complex visual charts and more diverse narrative genres must
be developed. By improving the reusability of existing narrative
visualizations, amateurs are able to create more visual stories
efficiently and effectively [93].
Facilitate the Adaptability of Different Software. Some exist-
ing tools are integrated with office software. For example, all
the features of the DataComicsJS [56] tool can be replicated
in presentation tools (e.g., Microsoft PowerPoint [128]) and
drawing tools (e.g., Adobe Illustrator [117]). Chartreuse [29]
and InfoNice [30] are also both integrated into Microsoft Office
software in the form of plug-ins. After incorporating the natural
language algorithms of intelligent tools into productivity soft-
ware, the corresponding functions can work in the background.
For example, when a statement can be enhanced with visualiza-
tion, a message can ask if the user wants to use a recommended
chart [39]. This way, ML/AI-supported tools can reach a wider
audience.
ML/AI-generator tools are more intelligent than the pre-
vious three types of tools in that they require minimal
or no user involvement in the entire creation process.
These tools automate the analysis of data and directly generate
a complete narrative visualization without user intervention.
ML/AI-generator tools mostly target amateurs. The develop-
ment of such tools has gradually increased in the past decade.
As visual communication becomes increasingly important in our
daily life, we believe that such tools can play an important role in
the creation of narrative visualization. The following directions
can be studied in terms of understanding user intent to improve
accuracy and efficiency.
Improve Data Analysis Capability to Identify User Design Intent. Among the current six genres of narrative visualizations,
ML/AI-generator tools that can be used for timelines are gener-
ally lacking. Even though certain intelligent tools can be used
to create a timeline, they only modify the local area. Complet-
ing the entire creation remains time-consuming. While several
ML/AI-generator tools for other genres, such as AutoClips [97], can analyze the data and extract essential parameters from the
dataset, the final output is not satisfactory when facing different
datasets, different contexts, or more complex data types. There-
fore, the ability of ML/AI-generator tools to analyze complex
data in the future must be improved. In addition, a possible
direction is to study how to input the user’s creative intent into the
automation process and at which point in the creation process;
in this manner, the user’s intent can be fully grasped to achieve
the most satisfying outcome.
Develop Narrative Recommendation Tools to Clarify Design Intent. For statistical charts, researchers have developed many
visual recommendation systems such as Voyager [147] and
SeeDB [148]. However, research on such tools, specifically
for narrative visualization, is lacking. This situation can be
explained by recommendation methods being based on data
characteristics or design guidelines rather than the user’s de-
sign intent. Creating a narrative visualization recommendation
platform to store both design processes and outcomes could be a
potential research direction. By analyzing the collected informa-
tion, we could identify the user’s design intent with the help of
machine learning algorithms [149]. Such recommendation tools
can provide users with abundant design ideas and recommenda-
tions in the pre-creation stage of narrative visualizations.
This study outlines four types of narrative visualization tools by automation level. Furthermore, although the different tools are
divided into different narrative visualization genres in this study,
it does not mean that these tools can only create one genre of
narrative visualization. The tools present certain compatibility
across different genres of narrative visualizations. However,
novice users face a learning curve, to varying degrees, with any type of visualization tool. Moreover, these visualization tools can only tell users how the data have changed, not why; users still need to analyze the reasons behind the changes themselves. In addition, a one-size-fits-all tool that handles all scenarios and addresses
different users and goals does not exist. Therefore, all four levels
of automation have their unique values and are worth further in-
vestigation, from pure manual design following design theories
to the ultimate ML/AI-generator tools that support automation
in the whole visualization creation pipeline. Furthermore, with
the development of AI technology and the need to create and
share data visualization by amateurs, ML/AI-supported tools
and ML/AI-generator tools are becoming more popular in both
research and various application domains. ML/AI-supported
tools, with human participation and machine assistance, offer a
superior user experience and more diverse design opportunities
compared to authoring and generator tools. More efforts can
be invested in such human-centered ML/AI-supported narrative
visualization tools in the future.
X. CONCLUSION
In this study, we systematically reviewed 105 papers and tools
to study how automation can progressively engage in visualiza-
tion design and narrative processes to help users create narrative
visualizations more easily, effectively, and efficiently. We have
summarized six genres of narrative visualization (i.e., anno-
tated charts, infographics, timeline & storyline, data comics,
scrollytelling & slideshow, and data videos) based on previous
research, and four types of tools (i.e., design space, authoring
tool, ML/AI-supported tool, ML/AI-generator tool) based on
the intelligence and automation level of the tools. This study
enables users to comprehend the explicit and implicit design el-
ements of various narrative visualization genres, facilitating the
selection of appropriate tools for visual storytelling. However,
our survey excluded scientific visualization. In the field of sci-
entific visualization, narrative visualization has been applied in
scenarios such as climate or medical condition narratives [150].
We believe that more research and tools in scientific visualization
storytelling can be performed and developed in the future. We
further discuss new research challenges and outline potential
directions for future research and implementation.
ACKNOWLEDGMENTS
We would like to thank the anonymous reviewers for their constructive feedback.
REFERENCES
[1] R. Kosara and J. Mackinlay, “Storytelling: The next step for visualization,” Computer, vol. 46, no. 5, pp. 44–50, 2013.
[2] J. Hullman and N. Diakopoulos, “Visualization rhetoric: Framing effects in narrative visualization,” IEEE Trans. Vis. Comput. Graph., vol. 17, no. 12, pp. 2231–2240, Dec. 2011.
[3] Q. Wang, Z. Chen, Y. Wang, and H. Qu, “A survey on ML4VIS: Applying machine learning advances to data visualization,” IEEE Trans. Vis. Comput. Graph., vol. 28, no. 12, pp. 5134–5153, Dec. 2022.
[4] A. Wu et al., “AI4VIS: Survey on artificial intelligence approaches for data visualization,” IEEE Trans. Vis. Comput. Graph., vol. 28, no. 12, pp. 5049–5070, Dec. 2022.
[5] S. Zhu, G. Sun, Q. Jiang, M. Zha, and R. Liang, “A survey on automatic infographics and visualization recommendations,” Vis. Informat., vol. 4, no. 3, pp. 24–40, 2020.
[6] B. Lee, N. H. Riche, P. Isenberg, and S. Carpendale, “More than telling a story: Transforming data into visually shared stories,” IEEE Comput. Graph. Appl., vol. 35, no. 5, pp. 84–90, Sep./Oct. 2015.
[7] M. A. Borkin et al., “What makes a visualization memorable?,” IEEE Trans. Vis. Comput. Graph., vol. 19, no. 12, pp. 2306–2315, Dec. 2013.
[8] M. A. Borkin et al., “Beyond memorability: Visualization recognition and recall,” IEEE Trans. Vis. Comput. Graph., vol. 22, no. 1, pp. 519–528, Jan. 2015.
[9] H.-K. Kong, Z. Liu, and K. Karahalios, “Internal and external visual cue preferences for visualizations in presentations,” Comput. Graph. Forum, vol. 36, no. 3, pp. 515–525, 2017.
[10] D. Ren, M. Brehmer, B. Lee, T. Höllerer, and E. K. Choe, “ChartAccent: Annotation for data-driven storytelling,” in Proc. IEEE Pacific Visual. Symp., 2017, pp. 230–239.
[11] Y. Chen, J. Yang, S. Barlowe, and D. H. Jeong, “Touch2Annotate: Generating better annotations with less human effort on multi-touch interfaces,” in Proc. Extended Abstr. Hum. Factors Comput. Syst., 2010, pp. 3703–3708.
[12] Y. Chen, S. Barlowe, and J. Yang, “Click2Annotate: Automated insight externalization with rich semantics,” in Proc. IEEE Symp. Vis. Analytics Sci. Technol., 2010, pp. 155–162.
[13] E. Kandogan, “Just-in-time annotation of clusters, outliers, and trends in point-based data visualizations,” in Proc. IEEE Conf. Vis. Analytics Sci. Technol., 2012, pp. 73–82.
[14] C. Bryan, K.-L. Ma, and J. Woodring, “Temporal summary images: An approach to narrative visualization via interactive annotation generation and placement,” IEEE Trans. Vis. Comput. Graph., vol. 23, no. 1, pp. 511–520, Jan. 2017.
[15] S. Latif, Z. Zhou, Y. Kim, F. Beck, and N. W. Kim, “Kori: Interactive synthesis of text and charts in data documents,” IEEE Trans. Vis. Comput. Graph., vol. 28, no. 1, pp. 184–194, Jan. 2022.
[16] A. Fan, Y. Ma, M. Mancenido, and R. Maciejewski, “Annotating line charts for addressing deception,” in Proc. CHI Conf. Hum. Factors Comput. Syst., 2022, pp. 1–12.
[17] N. Kong and M. Agrawala, “Graphical overlays: Using layered elements to aid chart reading,” IEEE Trans. Vis. Comput. Graph., vol. 18, no. 12, pp. 2631–2638, Dec. 2012.
[18] A. Srinivasan, S. M. Drucker, A. Endert, and J. Stasko, “Augmenting visualizations with interactive data facts to facilitate interpretation and communication,” IEEE Trans. Vis. Comput. Graph., vol. 25, no. 1, pp. 672–681, Jan. 2019.
[19] H. Subramonyam and E. Adar, “SmartCues: A multitouch query approach for details-on-demand through dynamically computed overlays,” IEEE Trans. Vis. Comput. Graph., vol. 25, no. 1, pp. 597–607, Jan. 2019.
[20] J. Hullman, N. Diakopoulos, and E. Adar, “Contextifier: Automatic generation of annotated stock visualizations,” in Proc. SIGCHI Conf. Hum. Factors Comput. Syst., 2013, pp. 2707–2716.
[21] C. Liu, L. Xie, Y. Han, D. Wei, and X. Yuan, “AutoCaption: An approach to generate natural language description from visualization automatically,” in Proc. IEEE Pacific Visual. Symp., 2020, pp. 191–195.
[22] C. Cmeciu, M. Manolache, and A. Bardan, “Beyond the narrative visualization of infographics on European issues,” Stud. Media Commun., vol. 4, no. 2, pp. 54–69, 2016.
[23] L. Harrison, K. Reinecke, and R. Chang, “Infographic aesthetics: Designing for the first impression,” in Proc. 33rd Annu. ACM Conf. Hum. Factors Comput. Syst., 2015, pp. 1187–1190.
[24] K. T. Lyra et al., “Infographics or graphics text: Which material is best for robust learning?,” in Proc. IEEE 16th Int. Conf. Adv. Learn. Technol., 2016, pp. 366–370.
[25] X. Lan, Y. Shi, Y. Zhang, and N. Cao, “Smile or scowl? Looking at infographic design through the affective lens,” IEEE Trans. Vis. Comput. Graph., vol. 27, no. 6, pp. 2796–2807, Jun. 2021.
[26] N. Diakopoulos, F. Kivran-Swaine, and M. Naaman, “Playable data: Characterizing the design space of game-y infographics,” in Proc. SIGCHI Conf. Hum. Factors Comput. Syst., 2011, pp. 1717–1726.
[27] J. C. Dunlap and P. R. Lowenthal, “Getting graphic about infographics: Design lessons learned from popular infographics,” J. Vis. Lit., vol. 35, no. 1, pp. 42–59, 2016.
[28] N. W. Kim et al., “Data-driven guides: Supporting expressive design for information graphics,” IEEE Trans. Vis. Comput. Graph., vol. 23, no. 1, pp. 491–500, Jan. 2017.
[29] W. Cui et al., “A mixed-initiative approach to reusing infographic charts,” IEEE Trans. Vis. Comput. Graph., vol. 28, no. 1, pp. 173–183, Jan. 2022.
[30] Y. Wang et al., “InfoNice: Easy creation of information graphics,” in Proc. CHI Conf. Hum. Factors Comput. Syst., 2018, pp. 1–12.
[31] J. E. Zhang, N. Sultanum, A. Bezerianos, and F. Chevalier, “DataQuilt: Extracting visual elements from images to craft pictorial visualizations,” in Proc. CHI Conf. Hum. Factors Comput. Syst., 2020, pp. 1–13.
[32] D. Coelho and K. Mueller, “Infomages: Embedding data into thematic images,” Comput. Graph. Forum, vol. 39, no. 3, pp. 593–606, 2020.
[33] N. W. Kim, H. Im, N. H. Riche, A. Wang, K. Gajos, and H. Pfister, “DataSelfie: Empowering people to design personalized visuals to represent their data,” in Proc. CHI Conf. Hum. Factors Comput. Syst., 2019, pp. 1–12.
[34] H. Xia, N. H. Riche, F. Chevalier, B. De Araujo, and D. Wigdor, “DataInk: Direct and creative data-oriented drawing,” in Proc. CHI Conf. Hum. Factors Comput. Syst., 2018, pp. 1–13.
[35] B. Lee, R. H. Kazi, and G. Smith, “SketchStory: Telling more engaging stories with data through freeform sketching,” IEEE Trans. Vis. Comput. Graph., vol. 19, no. 12, pp. 2416–2425, Dec. 2013.
[36] M. Lu et al., “Exploring visual information flows in infographics, in
Proc. CHI Conf. Hum. Factors Comput. Syst., 2020, pp. 1–12.
[37] A. Tyagi, J. Zhao, P. Patel, S. Khurana, and K. Mueller, “User-
centric semi-automated infographics authoring and recommendation,
2021, arXiv:2108.11914.
[38] L.-P. Yuan, Z. Zhou, J. Zhao, Y. Guo, F. Du, and H. Qu, “InfoCol-
orizer: Interactive recommendation of color palettes for infographics,
IEEE Trans. Vis. Comput. Graph., vol. 28, no. 12, pp. 4252–4266,
Dec. 2021.
[39] W. Cui et al., “Text-to-Viz: Automatic generation of infographics from
proportion-related natural language statements, IEEE Trans. Vis. Com-
put. Graph., vol. 26, no. 1, pp. 906–916, Jan. 2020.
[40] C. Qian, S. Sun, W. Cui, J.-G. Lou, H. Zhang, and D. Zhang, “Retrieve-
then-adapt: Example-based automatic generation for proportion-related
infographics, IEEE Trans. Vis. Comput. Graph., vol. 27, no. 2,
pp. 443–452, Feb. 2021.
[41] Z. Chen, Y. Wang, Q. Wang, Y. Wang, and H. Qu, “Towards automated
infographic design: Deep learning-based auto-extraction of extensible
timeline, IEEE Trans. Vis. Comput. Graph., vol. 26, no. 1, pp. 917–926,
Jan. 2020.
[42] M. Brehmer, B. Lee, B. Bach, N. H. Riche, and T. Munzner, “Timelines
revisited: A design space and considerations for expressive storytelling,
IEEE Trans. Vis. Comput. Graph., vol. 23, no. 9, pp. 2151–2164,
Sep. 2017.
[43] X. Lan, X. Xu, and N. Cao, “Understanding narrative linearity for telling
expressive time-oriented stories, in Proc. CHI Conf. Hum. Factors
Comput. Syst., 2021, pp. 1–13.
[44] B. Bach, C. Shi, N. Heulot, T. Madhyastha, T. Grabowski, and P. Drag-
icevic, “Time curves: Folding time to visualize patterns of temporal
evolution in data, IEEE Trans. Vis. Comput. Graph., vol. 22, no. 1,
pp. 559–568, Jan. 2016.
[45] E. Di Giacomo, W. Didimo, G. Liotta, F. Montecchiani, and A. Tappini,
“Storyline visualizations with ubiquitous actors, in Proc. Int. Symp.
Graph Drawing Netw. Visual., 2020, pp. 324–332.
[46] Y. Tanahashi and K.-L. Ma, “Design considerations for optimizing sto-
ryline visualizations, IEEE Trans. Vis. Comput. Graph., vol. 18, no. 12,
pp. 2679–2688, Dec. 2012.
[47] M. Gronemann, M. Jünger, F. Liers, and F. Mambelli, “Crossing mini-
mization in storyline visualization, in Proc. Int. Symp. Graph Drawing
Netw. Visual., 2016, pp. 367–381.
[48] N. W. Kim, B. Bach, H. Im, S. Schriber, M. Gross, and H. Pfister,
“Visualizing nonlinear narratives with story curves, IEEE Trans. Vis.
Comput. Graph., vol. 24, no. 1, pp. 595–604, Jan. 2018.
[49] M. Brehmer et al., “Timeline storyteller: The design & deployment of
an interactive authoring tool for expressive timeline narratives, in Proc.
Comput. Journalism Symp., 2019, pp. 1–5.
[50] P. H. Nguyen, K. Xu, R. Walker, and B. W. Wong, “TimeSets: Timeline
visualization with set relations, Inf. Visual., vol. 15, no. 3, pp. 253–269,
2016.
[51] S. Liu, Y. Wu, E. Wei, M. Liu, and Y. Liu, “StoryFlow: Tracking the
evolution of stories, IEEE Trans. Vis. Comput. Graph., vol. 19, no. 12,
pp. 2436–2445, Dec. 2013.
[52] T. Tang, S. Rubab, J. Lai, W. Cui, L. Yu, and Y. Wu, “iStoryline: Effec-
tive convergence to hand-drawn storylines, IEEE Trans. Vis. Comput.
Graph., vol. 25, no. 1, pp. 769–778, Jan. 2019.
[53] T. Tang et al., “PlotThread: Creating expressive storyline visualizations
using reinforcement learning, IEEE Trans. Vis. Comput. Graph., vol. 27,
no. 2, pp. 294–303, Feb. 2021.
[54] A. Satyanarayan and J. Heer, “Authoring narrative visualizations
with ellipsis, Comput. Graph. Forum, vol. 33, no. 3, pp. 361–370,
2014.
[55] J. Fulda, M. Brehmer, and T. Munzner, “TimeLineCurator: Interactive
authoring of visual timelines from unstructured text, IEEE Trans. Vis.
Comput. Graph., vol. 22, no. 1, pp. 300–309, Jan. 2016.
[56] Z. Zhao, R. Marr, and N. Elmqvist, “Data comics: Sequential art for
data-driven storytelling, Univ. of Maryland, Tech. Rep. HCIL-2015-15,
2015.
[57] Z. Wang, H. Dingwall, and B. Bach, “Teaching data visualization and
storytelling with data comic workshops, in Proc. Extended Abstr. CHI
Conf. Hum. Factors Comput. Syst., 2019, pp. 1–9.
[58] B. Bach, N. H. Riche, S. Carpendale, and H. Pfister, “The emerging genre
of data comics, IEEE Comput. Graph. Appl., vol. 37, no. 3, pp. 6–13,
May/Jun. 2017.
[59] Z. Wang, S. Wang, M. Farinella, D. Murray-Rust, N. H. Riche, and B.
Bach, “Comparing effectiveness and engagement of data comics and
infographics, in Proc. CHI Conf. Hum. Factors Comput. Syst., 2019,
pp. 1–12.
[60] B. Bach, Z. Wang, M. Farinella, D. Murray-Rust, and N. H. Riche,
“Design patterns for data comics, in Proc. CHI Conf. Hum. Factors
Comput. Syst., 2018, pp. 1–12.
[61] Z. Wang, J. Ritchie, J. Zhou, F. Chevalier, and B. Bach, “Data
comics for reporting controlled user studies in human-computer inter-
action, IEEE Trans. Vis. Comput. Graph., vol. 27, no. 2, pp. 967–977,
Feb. 2021.
[62] Z. Zhao, R. Marr, J. Shaffer, and N. Elmqvist, “Understanding partition-
ing and sequence in data-driven storytelling, in Proc. Int. Conf. Inf.,
Springer, 2019, pp. 327–338.
[63] M. T. Hasan, A. Wolff, A. Knutas, A. Pässilä, and L. Kantola, “Playing
games through interactive data comics to explore water quality in a lake:
A case study exploring the use of a data-driven storytelling method in
co-design, in Proc. CHI Conf. Hum. Factors Comput. Syst. Extended
Abstr., 2022, pp. 1–7.
[64] N. W. Kim et al., “DataToon: Drawing dynamic network comics with
pen touch interaction, in Proc. CHI Conf. Hum. Factors Comput. Syst.,
2019, pp. 1–12.
[65] D. Kang, T. Ho, N. Marquardt, B. Mutlu, and A. Bianchi, “ToonNote:
Improving communication in computational notebooks using interactive
data comics, in Proc. CHI Conf. Hum. Factors Comput. Syst., 2021,
pp. 1–14.
[66] Z. Wang, H. Romat, F. Chevalier, N. H. Riche, and B. Bach, “Interactive
data comics, IEEE Trans. Vis. Comput. Graph., vol. 28, no. 1,
pp. 944–954, Jan. 2022.
[67] S. Suh, J. Zhao, and E. Law, “CodeToon: Story ideation, auto
comic generation, and structure mapping for code-driven storytelling,
2022, arXiv:2208.12981.
[68] J. Zhao et al., “ChartStory: Automated partitioning, layout, and caption-
ing of charts into comic-style narratives, 2021, arXiv:2103.03996.
[69] Y. Wang et al., “Datashot: Automatic generation of fact sheets from tab-
ular data, IEEE Trans. Vis. Comput. Graph., vol. 26, no. 1, pp. 895–905,
Jan. 2020.
[70] D. Shi, X. Xu, F. Sun, Y. Shi, and N. Cao, “Calliope: Automatic visual data
story generation from a spreadsheet, IEEE Trans. Vis. Comput. Graph.,
vol. 27, no. 2, pp. 453–463, Feb. 2021.
[71] D. Seyser and M. Zeiller, “Scrollytelling–an analysis of visual story-
telling in online journalism, in Proc. 22nd Int. Conf. Inf. Visual., 2018,
pp. 401–406.
[72] A. Godulla and C. Wolf, Digitale Langformen Im Journalismus Und
Corporate Publishing. Berlin, Germany: Springer, 2017.
[73] M. Elias, A. James, S. Lohmann, S. Auer, and M. Wald, “Towards an
open authoring tool for accessible slide presentations, in Proc. Int. Conf.
Comput. Helping People Special Needs, Springer, 2018, pp. 172–180.
[74] J. Hullman, S. Drucker, N. H. Riche, B. Lee, D. Fisher, and E. Adar, “A
deeper understanding of sequence in narrative visualization, IEEE Trans.
Visual. Comput. Graph., vol. 19, no. 12, pp. 2406–2415, Dec. 2013.
[75] R. E. Roth, “Cartographic design as visual storytelling: Synthesis and
review of map-based narratives, genres, and tropes, Cartographic J.,
vol. 58, no. 1, pp. 83–114, 2021.
[76] M. Conlen and J. Heer, “Idyll: A markup language for authoring and
publishing interactive articles on the web, in Proc. 31st Annu. ACM
Symp. User Interface Softw. Technol., 2018, pp. 977–989.
[77] N. Sultanum, F. Chevalier, Z. Bylinskii, and Z. Liu, “Leveraging text-
chart links to support authoring of data-driven articles with vizflow, in
Proc. CHI Conf. Hum. Factors Comput. Syst., 2021, pp. 1–17.
[78] T. Winters and K. W. Mathewson, “Automatically generating engaging
presentation slide decks, in Proc. Int. Conf. Comput. Intell. Music,
Sound, Art Des. (Part EvoStar), Springer, 2019, pp. 127–141.
[79] J. Lu et al., “Automatic generation of unit visualization-based scrol-
lytelling for impromptu data facts delivery, in Proc. IEEE 14th Pacific
Visual. Symp., 2021, pp. 21–30.
[80] F. Amini, N. H. Riche, B. Lee, C. Hurter, and P. Irani, “Understanding data
videos: Looking at narrative visualization through the cinematography
lens, in Proc. 33rd Annu. ACM Conf. Hum. Factors Comput. Syst., 2015,
pp. 1459–1468.
[81] R. Cao et al., “Examining the use of narrative constructs in data videos,
Vis. Informat., vol. 4, no. 1, pp. 8–22, 2020.
[82] X. Xu, L. Yang, D. Yip, M. Fan, Z. Wei, and H. Qu, “From ‘wow’ to
‘why’: Guidelines for creating the opening of a data video with cinematic
styles, in Proc. CHI Conf. Hum. Factors Comput. Syst., 2022, pp. 1–20.
[83] J. Thompson, Z. Liu, W. Li, and J. Stasko, “Understanding the design
space and authoring paradigms for animated data graphics, Comput.
Graph. Forum, vol. 39, no. 3, pp. 207–218, 2020.
[84] S. Sallam, Y. Sakamoto, J. Leboe-McGowan, C. Latulipe, and P. Irani,
“Towards design guidelines for effective health-related data videos: An
empirical investigation of affect, personality, and video content, in Proc.
CHI Conf. Hum. Factors Comput. Syst., 2022, pp. 1–22.
[85] Y. Wang, Z. Chen, Q. Li, X. Ma, Q. Luo, and H. Qu, “Animated narrative
visualization for video clickstream data, in Proc. Symp. Visual., ACM,
2016, pp. 1–8.
[86] W. Li, Y. Wang, H. Zhang, and H. Qu, “Improving engagement of
animated visualization with visual foreshadowing, in Proc. IEEE Visual.
Conf., 2020, pp. 141–145.
[87] X. Shu, A. Wu, J. Tang, B. Bach, Y. Wu, and H. Qu, “What makes a
data-GIF understandable?, IEEE Trans. Vis. Comput. Graph., vol. 27,
no. 2, pp. 1492–1502, Feb. 2021.
[88] Y. Shi, X. Lan, J. Li, Z. Li, and N. Cao, “Communicating with motion: A
design space for animated visual narratives in data videos, in Proc. CHI
Conf. Hum. Factors Comput. Syst., 2021, pp. 1–13.
[89] T. Tang, J. Tang, J. Hong, L. Yu, P. Ren, and Y. Wu, “Design guidelines
for augmenting short-form videos using animated data visualizations, J.
Visual., vol. 23, no. 4, pp. 707–720, 2020.
[90] F. Amini, N. H. Riche, B. Lee, A. Monroy-Hernandez, and P. Irani,
“Authoring data-driven videos with DataClips, IEEE Trans. Vis. Comput.
Graph., vol. 23, no. 1, pp. 501–510, Jan. 2017.
[91] X. Lan, Y. Shi, Y. Wu, X. Jiao, and N. Cao, “Kineticharts: Augmenting
affective expressiveness of charts in data stories with animation design,
IEEE Trans. Vis. Comput. Graph., vol. 28, no. 1, pp. 933–943, Jan. 2022.
[92] Z. Chen et al., “Augmenting sports videos with viscommentator, IEEE
Trans. Vis. Comput. Graph., vol. 28, no. 1, pp. 824–834, Jan. 2022.
[93] J. R. Thompson, Z. Liu, and J. Stasko, “Data animator: Authoring
expressive animated data graphics, in Proc. CHI Conf. Hum. Factors
Comput. Syst., 2021, pp. 1–18.
[94] Y. Wang, Y. Gao, R. Huang, W. Cui, H. Zhang, and D. Zhang, “Animated
presentation of static infographics with InfoMotion, Comput. Graph.
Forum, vol. 40, no. 3, pp. 507–518, 2021.
[95] Y. Kim and J. Heer, “Gemini 2: Generating keyframe-oriented animated
transitions between statistical graphics, in Proc. IEEE Visual. Conf.,
2021, pp. 201–205.
[96] T. Ge, B. Lee, and Y. Wang, “Cast: Authoring data-driven chart an-
imations, in Proc. CHI Conf. Hum. Factors Comput. Syst., 2021,
pp. 1–15.
[97] D. Shi, F. Sun, X. Xu, X. Lan, D. Gotz, and N. Cao, “Autoclips: An
automatic approach to video generation from data facts, Comput. Graph.
Forum, vol. 40, no. 3, pp. 495–505, 2021.
[98] E. Segel and J. Heer, “Narrative visualization: Telling stories with
data, IEEE Trans. Vis. Comput. Graph., vol. 16, no. 6, pp. 1139–1148,
Nov./Dec. 2010.
[99] C. Tong et al., “Storytelling and visualization: An extended survey,
Information, vol. 9, no. 3, 2018, Art. no. 65.
[100] A. Botero, K.-H. Kommonen, and S. Marttila, “Expanding design space:
Design-in-use activities and strategies, in Proc. Des. Complexity - DRS
Int. Conf., 2010, pp. 1–12.
[101] G. Fischer and E. Giaccardi, “Meta-design: A framework for the future
of end-user development, in End User Development, Berlin, Germany:
Springer, 2006, pp. 427–457.
[102] B. Westerlund, “Design space conceptual tool–grasping the design pro-
cess, in Proc. Nordic Des. Res. Conf., 2005, pp. 1–7.
[103] H.-J. Schulz, “Explorative graph visualization, PhD dissertation,
University of Rostock, School of Computer Science and Electrical Engi-
neering, 2010.
[104] H.-J. Schulz, S. Hadlak, and H. Schumann, “The design space of implicit
hierarchy visualization: A survey, IEEE Trans. Vis. Comput. Graph.,
vol. 17, no. 4, pp. 393–411, Apr. 2011.
[105] P. Zikas et al., “Immersive visual scripting based on VR software design
patterns for experiential training, Vis. Comput., vol. 36, no. 10, pp. 1965–
1977, 2020.
[106] R. Brath and M. Matusiak, “Automated annotations, in Proc. IEEE VIS
Workshop Visual. Commun., 2018, pp. 1–4.
[107] C. C. Marshall, “Annotation: From paper books to the digital library, in
Proc. 2nd ACM Int. Conf. Digit. Libraries, 1997, pp. 131–140.
[108] V. Gómez-Rubio, “ggplot2-elegant graphics for data analysis, J. Stat.
Softw., vol. 77, pp. 1–3, 2017.
[109] M. Bostock, V. Ogievetsky, and J. Heer, “D³ data-driven documents,
IEEE Trans. Vis. Comput. Graph., vol. 17, no. 12, pp. 2301–2309,
Dec. 2011.
[110] Tableau, 2006. Accessed: Feb. 14, 2022. [Online]. Available: https://
www.tableau.com/
[111] C. Lee, T. Yang, G. D. Inchoco, G. M. Jones, and A. Satyanarayan, “Viral
visualizations: How coronavirus skeptics use orthodox data practices to
promote unorthodox science online, in Proc. CHI Conf. Hum. Factors
Comput. Syst., 2021, pp. 1–18.
[112] Y. B. Shrinivasan, D. Gotz, and J. Lu, “Connecting the dots in
visual analysis, in Proc. IEEE Symp. Vis. Analytics Sci. Technol., 2009,
pp. 123–130.
[113] C. Kittivorawong, D. Moritz, K. Wongsuphasawat, and J. Heer, “Fast and
flexible overlap detection for chart labeling with occupancy bitmap, in
Proc. IEEE Visual. Conf., 2020, pp. 101–105.
[114] J. J. Otten, K. Cheng, and A. Drewnowski, “Infographics and public
policy: Using data visualization to convey complex information, Health
Affairs, vol. 34, no. 11, pp. 1901–1907, 2015.
[115] H. Naparin and A. Binti Saad, “Infographics in education: Review on
infographics design, Int. J. Multimedia Appl., vol. 9, no. 4, pp. 15–24,
2017.
[116] J. M. Albers, “Infographics: Horrid chartjunk or quality communi-
cation, in Proc. IEEE Int. Professional Commun. Conf., 2014, pp. 1–4.
[117] Adobe Systems Incorporated, “Adobe illustrator, 2023. Accessed:
Feb. 14, 2023. [Online]. Available: https://www.adobe.com/products/
illustrator.html
[118] Bohemian Coding, “Sketch - professional digital design for mac, 2010. Ac-
cessed: Feb. 14, 2023. [Online]. Available: https://www.sketch.com/
[119] Visme, 2013. Accessed: Jan. 07, 2022. [Online]. Available: https://www.
visme.co/make-infographics/
[120] Infogram, 2012. Accessed: Jan. 07, 2022. [Online]. Available: https://
infogram.com/
[121] Canva, 2018. Accessed: Jan. 07, 2022. [Online]. Available: https://www.
canva.cn/create/
[122] Webalon, Tiki-toki, 2011. Accessed: Feb. 14, 2023. [Online]. Available:
http://tiki-toki.com/
[123] D. Dukes and B. Heinley, Dipity, 2010. Accessed: Feb. 14, 2023. [Online].
Available: https://www.timetoast.com/timelines/dipity-online-timeline
[124] Northwestern University Knight Lab, Timelinejs, 2013. Accessed: Feb.
14, 2023. [Online]. Available: http://timeline.knightlab.com/
[125] A. Shaw, J. Larson, and B. Welsh, Timelinesetter, 2011. Accessed: Feb.
14, 2023. [Online]. Available: http://propublica.github.io/
timeline-setter/
[126] G. Genette, Narrative Discourse: An Essay in Method, vol. 3. Ithaca, NY,
USA: Cornell Univ. Press, 1983.
[127] O. Kashan, “Timeline of the universe, 2012. Accessed: Sep. 12,
2022. [Online]. Available: https://www.informationisbeautifulawards.
com/showcase/456-timeline-of-the-universe
[128] Microsoft PowerPoint, 2016. Accessed: Feb. 14, 2022. [Online]. Avail-
able: https://office.live.com/start/powerpoint.aspx
[129] A. Brand et al., “Medical graphic narratives to improve patient com-
prehension and periprocedural anxiety before coronary angiography and
percutaneous coronary intervention: A randomized trial, Ann. Intern.
Med., vol. 170, no. 8, pp. 579–581, 2019.
[130] S. McKenna, D. Mazur, J. Agutter, and M. Meyer, “Design activity
framework for visualization design, IEEE Trans. Vis. Comput. Graph.,
vol. 20, no. 12, pp. 2191–2200, Dec. 2014.
[131] S. McKenna, N. H. Riche, B. Lee, J. Boy, and M. Meyer, “Visual
narrative flow: Exploring factors shaping data visualization story read-
ing experiences, Comput. Graph. Forum, vol. 36, no. 3, pp. 377–387,
2017.
[132] Apple Keynote, 2003. Accessed: Feb. 14, 2022. [Online]. Available: https:
//www.apple.com/keynote/
[133] Google, Google slides, 2006. Accessed: Feb. 14, 2022. [Online]. Avail-
able: https://www.google.com/slides/about/
[134] S. Bocklandt, G. Verbruggen, and T. Winters, “Sandslide: Automatic
slideshow normalization, in Proc. Int. Conf. Document Anal. Recognit.,
Springer, 2021, pp. 445–461.
[135] M. Leake, H. V. Shin, J. O. Kim, and M. Agrawala, “Generating
audio-visual slideshows from text articles using word concreteness, in
Proc. CHI Conf. Hum. Factors Comput. Syst., 2020, pp. 1–11.
[136] Z. Liu et al., “Data illustrator: Augmenting vector design tools with lazy
data binding for expressive visualization authoring, in Proc. CHI Conf.
Hum. Factors Comput. Syst., 2018, pp. 1–13.
[137] F. Suprata, “Data storytelling with dashboard: Accelerating understand-
ing through data visualization in financial technology company case
study, J. Metris, vol. 20, no. 1, pp. 1–10, 2019.
[138] M. Sedlmair, M. Meyer, and T. Munzner, “Design study methodology:
Reflections from the trenches and the stacks, IEEE Trans. Vis. Comput.
Graph., vol. 18, no. 12, pp. 2431–2440, Dec. 2012.
[139] M. Oppermann and T. Munzner, “Data-first visualization design stud-
ies, in Proc. IEEE Workshop Eval. Beyond-Methodological Approaches
Visual., 2020, pp. 74–80.
[140] G. M. F. Nieto, K. Kitto, S. B. Shum, and R. Martinez-Maldonado,
“Beyond the learning analytics dashboard: Alternative ways to com-
municate student data insights combining visualisation, narrative and
storytelling, in Proc. 12th Int. Learn. Analytics Knowl. Conf., 2022,
pp. 219–229.
[141] P. Isenberg, B. Lee, H. Qu, and M. Cordeil, “Immersive visual data
stories, in Immersive Analytics, Berlin, Germany: Springer, 2018,
pp. 165–184.
[142] M. Karyda, D. Wilde, and M. G. Kjærsgaard, “Narrative physicalization:
Supporting interactive engagement with personal data, IEEE Comput.
Graph. Appl., vol. 41, no. 1, pp. 74–86, Jan./Feb. 2021.
[143] T. Hogan and E. Hornecker, “Towards a design space for multisensory
data representation, Interacting Comput., vol. 29, no. 2, pp. 147–167,
2017.
[144] P. Dragicevic, Y. Jansen, and A. V. Moere, “Data physicalization, in
Handbook of Human Computer Interaction, Berlin, Germany: Springer,
2020, pp. 1–51.
[145] P. Zhang, C. Li, and C. Wang, “VisCode: Embedding information in
visualization images using encoder-decoder network, IEEE Trans. Vis.
Comput. Graph., vol. 27, no. 2, pp. 326–336, Feb. 2021.
[146] J. Fu et al., “Chartem: Reviving chart images with data embedding, IEEE
Trans. Vis. Comput. Graph., vol. 27, no. 2, pp. 337–346, Feb. 2021.
[147] J. Heer, F. B. Viégas, and M. Wattenberg, “Voyagers and voyeurs: Sup-
porting asynchronous collaborative information visualization, in Proc.
SIGCHI Conf. Hum. Factors Comput. Syst., 2007, pp. 1029–1038.
[148] M. Vartak, S. Rahman, S. Madden, A. G. Parameswaran, and N. Polyzotis,
“SeeDB: Efficient data-driven visualization recommendations to support
visual analytics, Proc. VLDB Endowment Int. Conf. Very Large Data
Bases, vol. 8, pp. 2182–2193, 2015.
[149] Y. Luo, X. Qin, N. Tang, and G. Li, “DeepEye: Towards automatic
data visualization, in Proc. IEEE 34th Int. Conf. Data Eng., 2018,
pp. 101–112.
[150] K.-L. Ma, I. Liao, J. Frazier, H. Hauser, and H.-N. Kostis, “Scientific
storytelling using visualization, IEEE Comput. Graph. Appl., vol. 32,
no. 1, pp. 12–19, Jan./Feb. 2011.
Qing Chen received the BEng degree from the De-
partment of Computer Science, Zhejiang University,
and the PhD degree from the Department of Computer
Science and Engineering, Hong Kong University of
Science and Technology (HKUST). After receiving
the PhD degree, she worked as a postdoc with Inria
and Ecole Polytechnique. She is currently an assistant
professor with Tongji University. Her research inter-
ests include information visualization, visual analyt-
ics, human-computer interaction, online education,
visual storytelling, intelligent healthcare and design.
Shixiong Cao received the master’s degree in design
from Sangmyung University, South Korea, in 2019,
and the PhD degree from Sungkyunkwan University,
South Korea, in 2023. Currently, he works as a post-
doctoral researcher with Tongji University, and his
research interests include information design, narra-
tive visualization design, and user experience design.
Jiazhe Wang received the master’s degree from the
Department of Computer Science, University of Ox-
ford. He is currently a data and front-end technologist
with Ant Group, a core member of the data visualiza-
tion team AntV. He is also a tech leader of the aug-
mented analytics team for the internal BI product of
Ant Group. His research interests include automated
visualization, augmented analytics and narrative vi-
sualization.
Nan Cao received the PhD degree in computer sci-
ence and engineering from the Hong Kong University
of Science and Technology (HKUST), Hong Kong,
China, in 2012. He is currently a professor with
Tongji University and the assistant dean of the Tongji
College of Design and Innovation. He also directs
the Tongji Intelligent Big Data Visualization Lab
(iDVx Lab) and conducts interdisciplinary research
across multiple fields, including data visualization,
human computer interaction, machine learning, and
data mining. He was a research staff member with
the IBM T.J. Watson Research Center, New York, NY, USA before joining the
Tongji faculty, in 2016.